Beware of Agentic AI Security Risks

The launch of “OpenClaw”, an open-source artificial intelligence (AI) agent, has attracted significant attention because it supports a broad range of AI functions and can operate independently. However, unlike conversational AIs such as ChatGPT, OpenClaw functions as a proactive assistant: once permissions are granted, it controls applications, organizes files, processes emails, sorts data and executes coding workflows. This autonomy poses heightened security risks that users should treat with caution.

The Office of the Privacy Commissioner for Personal Data (PCPD) has recently published a statement about the potential cybersecurity and personal data privacy risks related to OpenClaw and other agentic AI tools, as these AI applications may request extensive system permissions and operate autonomously.

Potential Risks include:

  • Unauthorized data access due to the high level of system access granted;
  • Data leakage and accidental deletion of files;
  • System intrusion, as vulnerabilities in these open-source tools are more likely to be discovered and exploited by threat actors.

Recommended Good Practices on using Agentic AI tools:

DOs:

  1. Use the official and the latest version.
  2. Enforce isolation for the runtime environment.
  3. Strengthen network control to minimize Internet exposure.
  4. Grant only the minimum permission necessary.
  5. Install and use plugins or skills with caution.
  6. Guard against browser hijacking.
  7. Regularly check for and apply official security patches.
  8. Enable detailed log auditing functions.
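The isolation and least-privilege practices above (DOs 2–4) can be sketched as a sandboxed container deployment. This is an illustrative example only: the "openclaw" image name and mounted paths are assumptions, not official artifacts.

```shell
# Hypothetical sketch of DOs 2-4: run the agent in an isolated container.
# The "openclaw" image name and mount paths are illustrative assumptions.
#   --network none : no Internet exposure (DO 3)
#   --user         : run as an unprivileged, non-root user (DO 4)
#   --read-only / --cap-drop ALL : immutable root filesystem, no capabilities
#   -v             : mount only the working files the agent actually needs
docker run --rm \
  --network none \
  --user 1000:1000 \
  --read-only \
  --cap-drop ALL \
  -v "$PWD/agent-workspace:/workspace" \
  openclaw:latest
```

A deployment along these lines limits what a compromised or misbehaving agent can reach, since it has no network access, no root privileges and no files beyond the mounted workspace.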

DON’Ts:

  1. Don’t use outdated or third-party mirror versions of AI tools.
  2. Don’t expose AI agent instances to the Internet.
  3. Don’t deploy or run the agent using administrator accounts.
  4. Don’t install skill packs that require entering passwords.
  5. Don’t browse unverified websites.
  6. Don’t store keys in plaintext in environment variables.
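The last DON'T can be illustrated with a small shell sketch: instead of exporting a key as a plaintext environment variable, keep it in an owner-only file and verify its permissions before use. The file path and key value below are assumptions for illustration.

```shell
# Hypothetical sketch for DON'T 6: keep secrets out of plaintext
# environment variables. The path and key value are assumptions.
keyfile="$HOME/.secrets/agent_key"

umask 077                 # files created below are owner-only (mode 600)
mkdir -p "$HOME/.secrets"
printf '%s' 'sk-example-not-a-real-key' > "$keyfile"

# Verify permissions before use: refuse group/world-readable key files.
perms=$(stat -c '%a' "$keyfile" 2>/dev/null || stat -f '%Lp' "$keyfile")
if [ "$perms" = "600" ]; then
  key=$(cat "$keyfile")   # read the secret only at the point of use
else
  echo "refusing: $keyfile has mode $perms, expected 600" >&2
  exit 1
fi
```

Reading the secret from a permission-checked file at the point of use avoids leaving it visible to every child process and in crash dumps, which is the typical exposure path for plaintext environment variables.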


Users are reminded NOT to install OpenClaw or its variants on machines connected to the campus network. In addition, they should refrain from granting excessive permissions to any agentic AI tools and must NOT use these applications to process any personal data. The University will continue to monitor the situation and provide further guidance as necessary.


Published on:  27 Mar 2026