
ChatGPT introduces Lockdown Mode and risk labels for safer AI
OpenAI has introduced Lockdown Mode and “Elevated Risk” labels in ChatGPT to reduce prompt injection risk as AI tools connect to the web and apps. For owners and heads of operations, it is a useful signal of where extra guardrails belong before you scale adoption.


AI is getting more useful, and that changes the security stakes
As AI systems take on more complex work, especially work that touches the web and connected apps, the risk profile shifts. OpenAI’s latest ChatGPT updates focus on one issue that is becoming hard to ignore: prompt injection, where a third party tries to manipulate an AI tool into following malicious instructions or exposing sensitive information.
Source: OpenAI announcement (link in Sources).
Two updates that make risk easier to manage
OpenAI introduced two related protections that matter for day-to-day business use.
First is Lockdown Mode, an optional advanced security setting aimed at higher-risk users. The idea is simple: tightly constrain how ChatGPT can interact with external systems so there are fewer paths for data to leak during an attack.
Second is a consistent “Elevated Risk” label for a small set of capabilities across ChatGPT, ChatGPT Atlas, and Codex. These labels are designed to make it obvious when a feature could introduce extra security risk, particularly where network or app access is involved.
Lockdown Mode is a practical control for high-risk roles
Lockdown Mode is designed for users who are more likely targets, such as executives or security teams. Operationally, the most important point is that it is deterministic. Certain tools and capabilities are disabled in a predictable way to reduce the chance an attacker can exploit them.
One example OpenAI gives is browsing. In Lockdown Mode, browsing is limited to cached content so no live network requests leave OpenAI’s controlled network. That reduces the chance sensitive data could be exfiltrated through browsing.
For ops leaders, this is a clear pattern you can apply internally: not every user needs the same level of access. High-privilege roles and high-sensitivity workflows need stricter defaults.
OpenAI notes Lockdown Mode is available on certain business plans and can be enabled by admins through workspace settings on a role-by-role basis. Admins can also choose which connected apps, and which actions inside those apps, are available to users in Lockdown Mode.
“Elevated Risk” labels help you spot the sharp edges
A common problem with AI rollouts is that risk is invisible until something goes wrong. “Elevated Risk” labels are meant to make those edges easier to see up front.
OpenAI’s example is Codex network access. If you let an AI tool take actions on the web, you are increasing capability, but you are also increasing the number of places data could flow. A consistent label, plus clear wording about what changes and when it is appropriate, makes it easier to standardise internal guidance.
For SMEs, this is valuable even if you do not use every OpenAI product mentioned. The bigger lesson is that AI vendors are starting to label risk at the feature level, not just at the platform level. That is where practical governance is heading.
What this means for owners and heads of operations
If you are responsible for delivery, compliance, and outcomes, these updates point to three operational realities.
Security is now a workflow choice, not a policy document. If teams use AI inside browsers, CRMs, ticketing tools, document stores, and finance systems, your controls need to exist where the work happens.
Not all AI features deserve the same trust level. Some features are low risk and high value, like drafting internal comms from approved templates. Others are higher risk, like giving an assistant broad access to connected apps or the open web. Clear labels make it easier to draw that line.
Prompt injection is not an IT only problem. It is a business process risk. If a tool can be tricked into pulling the wrong data, sending something externally, or misapplying instructions, that is an ops issue as much as it is a security issue.
Three actions you can take this week
1) Map where AI touches external systems
List the tools your teams use that connect AI to the web, email, files, or third-party apps. Identify the workflows where staff might paste sensitive information, such as customer details, contracts, pricing, or internal performance data.
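One way to make this mapping concrete is to keep the inventory as data rather than a document, so it can be filtered and reviewed. A minimal sketch, with hypothetical tool names, connection types, and sensitivity labels:

```python
# Illustrative sketch: a lightweight inventory of AI touchpoints.
# Tool names, connections, and data labels are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class AITouchpoint:
    tool: str                                            # the AI-enabled tool in use
    connects_to: list = field(default_factory=list)      # external systems it can reach
    sensitive_data: list = field(default_factory=list)   # data staff might expose

inventory = [
    AITouchpoint("chat-assistant", ["web", "email"], ["customer details", "pricing"]),
    AITouchpoint("doc-summariser", ["document store"], ["contracts"]),
    AITouchpoint("internal-drafting"),  # no external access, no sensitive data
]

# Flag touchpoints that combine external access with sensitive data:
# these are where guardrails belong first.
high_risk = [t.tool for t in inventory if t.connects_to and t.sensitive_data]
print(high_risk)  # → ['chat-assistant', 'doc-summariser']
```

Even a list this small makes the review question explicit: which tools are both externally connected and handling sensitive data?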
2) Set role based access rules for connected features
Decide which roles should have access to higher-risk capabilities such as browsing, connected apps, or agent-style automation. A simple starting point is: default to least access, then grant access based on a business case and a clear owner.
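The least-access rule above can be sketched as a default-deny policy: nothing is allowed unless a role-capability pair has been explicitly granted, with a named owner and business case recorded. Role and capability names here are hypothetical examples, not a real product configuration:

```python
# Illustrative sketch of a least-access policy: capabilities are denied by
# default and granted per role only with a business case and a named owner.
GRANTS = {
    ("security-team", "browsing"): {"owner": "head-of-security", "case": "threat research"},
    ("ops-lead", "connected-apps"): {"owner": "head-of-ops", "case": "ticket triage"},
}

def is_allowed(role: str, capability: str) -> bool:
    """Default to least access: allow only explicitly granted pairs."""
    return (role, capability) in GRANTS

print(is_allowed("ops-lead", "connected-apps"))  # True: granted, with an owner
print(is_allowed("ops-lead", "browsing"))        # False: no grant, so denied
```

The useful property is that every "yes" carries an owner and a reason, so access reviews become a walk through one table rather than an argument.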
3) Add a “safe use” checklist to everyday ops
Create a short checklist for staff using AI in operational workflows. Include basics like verifying instructions that reference links, treating unexpected prompts as suspicious, and avoiding pasting sensitive data into tools or contexts that are not approved for it. Tie this to your actual systems, not generic advice.
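Parts of such a checklist can be automated as a pre-paste check. A minimal sketch: flag text that contains links to verify or data that looks sensitive before it goes into an unapproved tool. The patterns are hypothetical examples, not a complete data-loss-prevention rule set:

```python
import re

# Illustrative sketch of one checklist item automated: flag text a human
# should review before pasting it into an AI tool. Patterns are examples only.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "link to verify": re.compile(r"https?://\S+"),
}

def flag_before_paste(text: str) -> list:
    """Return the checklist items triggered by this text."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

print(flag_before_paste("Invoice for jane@example.com, see https://example.com/pay"))
# → ['email address', 'link to verify']
```

A check like this does not replace judgment; it just turns "treat unexpected content as suspicious" from advice into a prompt at the moment of use.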
A positive step, and a useful direction of travel
The standout here is not only the features themselves. It is the intent to make risk visible and controllable as AI becomes more connected. For SMEs, that is encouraging. It suggests leading platforms are starting to treat security and safety as product design, not a footnote.
If you want to pressure test your current AI usage, align roles to the right controls, and set guardrails without slowing teams down, book an AI advisory meeting.
Book an AI advisory meeting to review your AI controls
We will review your current tools, where prompt injection risk is most likely to show up, and what guardrails make sense for your workflows.

Deep Research just got more controllable – and that’s the point
AI research tools are only as useful as the sources they’re built on. If the inputs are noisy, biased, or outdated, the output looks confident but drifts away from what you actually need. That’s why the latest improvements to Deep Research in ChatGPT matter: they’re less about “more AI” and more about better governance: clearer sourcing, tighter focus, and easier accountability.

From chat to agents: why AI adoption is the real frontier
AI at work has moved fast. First it was chat — individuals experimenting with prompts in a browser. Then came custom assistants, shaped around roles and workflows. Now we’re entering the next phase: AI agents that don’t just respond, but plan, act, and work across systems. OpenAI’s newly announced Frontier platform is a clear signal of that shift. But while the technology is impressive, the most important story isn’t about models or tooling. It’s about adoption.

