Insights
04 Feb 2026

Claude’s legal plugin rattles legal tech

Anthropic has launched a Legal plugin for Claude Cowork that aims to speed up contract review, NDA triage, and compliance workflows. The early market reaction signals a wider shift from generic chatbots to department-ready AI tools.

Written by
The gecco team

From generic chat to real legal workflows

Anthropic has launched a new Legal plugin for Claude Cowork, designed to speed up contract review, NDA triage, and compliance work for in-house teams. It is a clear sign that the market is shifting again. This is less about who has the best chatbot and more about who can ship useful, department-ready workflows.

That shift landed loudly. News coverage linked the launch to sharp falls across several legal and data services stocks. Whether you see that as a market overreaction or a real signal, leaders should take note. The tools are getting specific, and they are landing inside day-to-day work.

What the Legal plugin actually does

The Legal plugin is built around concrete, repeatable commands. For example, it can review a contract clause by clause against a negotiation playbook, then flag issues using a traffic light system and suggest redlines. It can also pre-screen incoming NDAs to route them to standard approval, counsel review, or full review.

Other commands cover vendor agreement checks, contextual legal briefings, and templated responses for recurring requests such as data subject access requests and discovery holds. The important detail is that the plugin is designed to be configurable. It is meant to follow your organisation’s playbook and risk tolerances, not a generic internet standard.
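To make the triage idea concrete, here is a minimal sketch of the kind of rule-based routing described above. The clause names, risk weights, and thresholds are illustrative assumptions, not the plugin’s actual rules; a real deployment would encode your own playbook.

```python
# Illustrative sketch of playbook-driven NDA triage. Clause names, risk
# weights, and thresholds below are hypothetical examples only.

STANDARD, COUNSEL, FULL = "standard approval", "counsel review", "full review"

# Hypothetical playbook: clause deviation -> risk weight
PLAYBOOK_RISK = {
    "term_over_3_years": 1,
    "non_mutual_confidentiality": 2,
    "ip_assignment_language": 3,
    "foreign_governing_law": 2,
}

def triage_nda(flagged_clauses):
    """Route an incoming NDA based on which playbook clauses it trips."""
    # Unknown deviations still count, with a default weight of 1.
    score = sum(PLAYBOOK_RISK.get(clause, 1) for clause in flagged_clauses)
    if score == 0:
        return STANDARD   # green: matches the playbook
    if score <= 2:
        return COUNSEL    # amber: minor deviations
    return FULL           # red: material deviations

print(triage_nda([]))                     # -> standard approval
print(triage_nda(["term_over_3_years"]))  # -> counsel review
```

The point is not the scoring itself but where the thresholds come from: they are set by your legal team, reviewed by counsel, and adjusted as the playbook evolves.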

It also highlights a growing pattern. These tools are increasingly built to connect with the systems you already use, so the AI can work with real context. In Claude’s world, that often means connecting through the Model Context Protocol, which is becoming a common standard for plugging AI into tools and data sources.

Why the market noticed this time

Legal work has always looked like a safe place to start with AI. There is lots of reading, lots of categorising, and lots of repeated patterns. The barrier has been trust. Most leaders do not want legal decisions made by a black box, and many tools have felt too generic to use on real documents.

The new wave is different. Instead of saying “ask the chatbot”, vendors are packaging specific workflows and giving teams a way to set guardrails. The Legal plugin is explicit about attorney oversight. It is positioned as acceleration and triage, not legal advice.

That is why investors are nervous. If more of this work becomes self-serve inside AI platforms, some existing software and data businesses may be squeezed. Even if the reality is slower, the direction is clear.

Expect one plugin per department

The Legal plugin sits inside a wider plugin directory for Claude Code and Cowork. That directory includes tools for software development such as front-end design, code review, and GitHub workflows, alongside many other integrations. The message is simple. AI platforms want to become the place work happens, not a separate tab.

For business leaders, this “one plugin per department” approach will be familiar. It mirrors what happened with SaaS. First came broad platforms, then came role-specific tools, then came the integration race.

In practice, that means more choice. It also means more fragmentation. Different teams will pick different tools, and you will quickly end up with duplicated prompts, inconsistent outputs, and no shared standards.

The advantage is not the model, it is the rollout

It is tempting to frame this as Anthropic versus OpenAI, or Claude versus ChatGPT. In reality, most organisations will use multiple tools. One may lead in code and workflow integrations. Another may lead in multimodal work, sharing custom assistants, or team-wide connectors.

What will matter most is whether your organisation can adopt these tools without creating risk, confusion, or tool fatigue. You need clear policies, training that fits real workflows, and a practical way to standardise how teams use AI.

If you are rolling out AI into sensitive functions like legal, start small. Pick one workflow. Define what “good” looks like. Set review points. Measure time saved and error rates. Then scale to the next team with the same playbook.
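Measuring that pilot does not need special tooling. A minimal sketch, assuming you log the minutes spent per review and any errors caught in a later quality check (the field names and the 45-minute manual baseline are assumptions for illustration):

```python
# Minimal sketch of two rollout metrics: time saved against a manual
# baseline, and the error rate of AI-assisted reviews. The 45-minute
# baseline and log fields are hypothetical examples.

MANUAL_BASELINE_MIN = 45  # assumed average minutes for a fully manual review

reviews = [
    {"doc": "NDA-014", "minutes": 12, "errors_found_in_qa": 0},
    {"doc": "NDA-015", "minutes": 18, "errors_found_in_qa": 1},
    {"doc": "MSA-007", "minutes": 30, "errors_found_in_qa": 0},
]

def rollout_metrics(reviews, baseline=MANUAL_BASELINE_MIN):
    """Return (total minutes saved, share of reviews with a QA error)."""
    total_saved = sum(baseline - r["minutes"] for r in reviews)
    error_rate = sum(1 for r in reviews if r["errors_found_in_qa"]) / len(reviews)
    return total_saved, error_rate

saved, rate = rollout_metrics(reviews)
print(f"Minutes saved: {saved}, error rate: {rate:.0%}")
```

Even a spreadsheet version of this gives you the numbers you need before deciding whether to scale to the next team.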

Ready to turn AI tools into real outcomes?

If you are seeing teams experiment with Claude, ChatGPT, or a mix of platforms, you are not behind. You are early. The smartest move is to build a simple, repeatable rollout plan that covers governance, training, and measurable outcomes.

If you want a grounded view of what to adopt first, and how to roll it out safely across teams, book a call with gecco. We will help you map the best starting points, set guardrails, and get your people confident using AI in real work.

Source: https://claude.com/plugins/legal
