A risk-based AI architecture, in your own environment.
The EU AI Act is not a ban on enterprise AI. It is a framework for classifying risk, assigning obligations, and making sure organisations can explain what their systems do, why they do it, and who remains accountable. The hard question is not “are we using AI?” It is “what kind of AI use case is this, what risk category does it fall into, and what controls do we need around it?” Most enterprise use cases sit far away from prohibited practices. They still raise real questions about transparency, logging, human oversight, data governance, and record-keeping. Invisibles is designed for that operational reality.
The Act is risk-based on purpose.
The AI Act is built around risk tiers. Prohibited uses sit at the top. High-risk systems carry the heaviest obligations. Limited-risk systems trigger transparency duties. General-purpose AI models have their own layer of obligations, especially for model providers. That structure matters because it stops companies from treating every AI use case as if it were the same.
Most Salesforce and enterprise workflow use cases are not in the prohibited category, and many are not high-risk. But that does not mean “no obligations.” It means you still need to assess the use case, document the reasoning, and apply the right controls. A support-summarisation Prompt is not the same as a system that materially affects employment, credit, access to services, or biometric identification. The architecture should let you make those distinctions cleanly.
Deployer obligations are where most customers live.
For most Invisibles customers, the relevant role is deployer. You choose the use case. You decide which data is in scope. You decide whether a human reviews the output or whether a Skill can take action. You decide which model provider to use. That is why the AI Act’s deployer obligations matter more in practice than abstract model debates.
Article 26 is the anchor. Deployers of high-risk systems must use them in accordance with the instructions for use, assign human oversight, monitor operation, and keep the logs the system generates. Even outside high-risk categories, the same governance habits are becoming standard enterprise practice. Invisibles supports that by making the system boundary explicit. Prompts are governed units. Skills are callable tools with OpenAPI-decorated interfaces. Data Context Mappings pin the exact fields in scope. Audit records what happened. That is the kind of structure compliance teams need when they ask, “show me how this system works.”
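To make that boundary concrete, here is a minimal sketch of what pinning the data scope for a Prompt can look like. The class and field names are hypothetical illustrations, not the actual Invisibles configuration format.

```python
# Illustrative sketch only: the DataContextMapping class and field names are
# hypothetical stand-ins, not the actual Invisibles configuration format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataContextMapping:
    """Pins the exact source fields a governed Prompt is allowed to receive."""
    prompt_id: str
    source_object: str
    allowed_fields: List[str] = field(default_factory=list)
    masked_fields: List[str] = field(default_factory=list)


# A support-summarisation Prompt sees case text, but never raw contact details.
case_summary_mapping = DataContextMapping(
    prompt_id="support_case_summary_v1",
    source_object="Case",
    allowed_fields=["Subject", "Description", "Status"],
    masked_fields=["ContactEmail", "ContactPhone"],
)
```

The value for a compliance review is that the scope is declared up front and can be read, versioned, and challenged, rather than inferred from behaviour after the fact.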
Transparency, logging, and human oversight.
One concern comes up in almost every enterprise AI review: trust. Can we trust the system? Can we explain what it did? Can a human intervene? Those questions map directly to the AI Act’s emphasis on transparency and oversight, especially Articles 13 and 14 for high-risk systems.
Invisibles is designed to support those controls. Agents can be deployed as assistive systems rather than fully autonomous ones. Skills can be permissioned and limited to specific users, profiles, or channels. Audit creates a durable record of runs, actions, and outputs. That does not automatically make a use case compliant, but it gives you a way to implement meaningful human oversight instead of treating it as a policy sentence with no technical backing.
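A minimal sketch of that assistive pattern, assuming a hypothetical permission check and review queue rather than the actual Invisibles Skill API:

```python
# Illustrative sketch only: the permission model and review queue below are
# hypothetical, not the actual Invisibles Skill API.
ALLOWED_PROFILES = {"Support_Manager", "Compliance_Reviewer"}


def run_skill_with_oversight(skill_name, proposed_action, user_profile, review_queue):
    """Check who may invoke the Skill, then hold the action for human sign-off."""
    if user_profile not in ALLOWED_PROFILES:
        raise PermissionError(f"{user_profile} may not invoke {skill_name}")
    # Assistive deployment: the agent recommends, a named human approves.
    review_queue.append({
        "skill": skill_name,
        "action": proposed_action,
        "requested_by": user_profile,
        "status": "pending_human_review",
    })
    return "queued_for_review"
```

The point is structural: the proposed action is recorded and held for a named person, so oversight is a property of the system rather than a sentence in a policy document.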
GPAI model providers and customer choice.
The AI Act also distinguishes between the provider of a general-purpose AI model and the company deploying an AI system built on top of it. That matters because Invisibles is not itself the foundation model. Customers may choose Anthropic, OpenAI, or cloud-native model services in AWS or Azure, depending on region, performance, and policy requirements.
That separation is useful in practice. Model-provider obligations sit with the provider of the GPAI model. System-level obligations for your use case sit with you as deployer. Invisibles sits in the middle as the governed application layer: the place where data is prepared, masked, tokenized, routed, permissioned, and logged before and after model interaction. That is often the missing layer in enterprise AI architecture, and it is the layer most relevant to internal governance.
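As a sketch of that middle layer, assuming hypothetical helper names and a simplified surrogate-token scheme (a production system would keep a customer-controlled vault so tokens can be re-identified):

```python
# Illustrative sketch only: helper names and the surrogate-token scheme are
# hypothetical; they stand in for the governed steps before model interaction.
import hashlib


def tokenize(value: str) -> str:
    """Replace an identifier with a surrogate token. A real system would keep
    a customer-controlled mapping so the original value can be recovered."""
    return "tok_" + hashlib.sha256(value.encode()).hexdigest()[:12]


def prepare_payload(record: dict, allowed_fields: list, pii_fields: list) -> dict:
    """Keep only pinned fields and tokenize PII before the model ever sees it."""
    payload = {key: value for key, value in record.items() if key in allowed_fields}
    for name in pii_fields:
        if name in payload:
            payload[name] = tokenize(payload[name])
    return payload
```

Whatever the exact mechanics, the design choice is the same: the governed preparation, masking, and logging happen in your environment, independent of which model provider sits on the other side.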
Record-keeping, conformity, and rollout timing.
The AI Act phases in over time, with different obligations applying on different dates depending on the category and role. Buyers do not need a perfect final-state answer on day one, but they do need a system that will not become impossible to govern later. That is why record-keeping matters now, even for use cases that are not high-risk today.
Articles 9, 13, 14, and related provisions all point in the same direction: risk management, transparency, oversight, and documentation. For use cases that fall into high-risk categories or trigger public-sector deployment obligations, customers may also need to assess fundamental-rights impacts as part of their own governance process. If a use case later moves into a more sensitive category, you should not have to rebuild the entire architecture just to prove what happened. Invisibles gives you a logging and evidence layer from the start, useful for internal review now and for more formal conformity or supervisory discussions later if a use case warrants it.
From AI Act obligation to product mechanism.
Risk classification maps to the customer’s own use-case review process, supported by clear system boundaries in Prompts, Skills, and Data Context Mappings.
Transparency obligations map to the ability to document what a Prompt does, what data it sees, and what channel exposes it.
Human oversight maps to assistive deployment patterns, permissioned Skills, and reviewable actions.
Logging and record-keeping map to immutable audit with exportable evidence.
Data governance maps to pinned field mappings, masking, tokenization, and customer-controlled retention.
Deployer accountability maps to the fact that the system runs in your own AWS or Azure account under your IAM and policy controls.
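Taking the logging and record-keeping mapping as an example, here is a hedged sketch of what an exportable audit record might contain. The field names are illustrative, not the actual Invisibles export schema.

```python
# Illustrative sketch only: the record shape is hypothetical; the point is that
# each run leaves an exportable, reviewable trace.
import json
from datetime import datetime, timezone

audit_record = {
    "run_id": "run_000123",
    "prompt_id": "support_case_summary_v1",
    "skill_calls": ["lookup_case_history"],
    "fields_in_scope": ["Subject", "Description", "Status"],
    "human_reviewer": "reviewer@example.com",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(audit_record, indent=2))  # evidence export for internal review
```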
Questions compliance teams ask.
Is Invisibles itself a high-risk AI system under the EU AI Act?
Not by default. Risk classification depends on your specific use case, sector, and deployment pattern — not on the fact that Invisibles is used. A support-summary Prompt is not the same as a system that materially affects employment, credit, or access to services.
Who is the deployer under the AI Act?
The customer is typically the deployer. You choose the use case, the data in scope, the model provider, and the operating controls, because the system runs in your own environment under your IAM.
Does Invisibles provide human oversight controls?
Yes. You can deploy assistive workflows, permission Skills, restrict channels, and use audit records to support review and intervention. Agents can surface recommendations without forcing full automation.
How does Invisibles help with transparency obligations?
It makes the system legible. Prompts, Skills, Data Context Mappings, and audit logs create a concrete record of what the system is configured to do and what it actually did at each run.
Are model-provider obligations the same as deployer obligations?
No. GPAI model providers and system deployers have different obligations under the Act. Most Invisibles customers care primarily about the deployer side because the customer chooses the model and the use case.
When do these obligations apply?
The Act phases in over time, with different obligations applying on different dates depending on the category and role. Prohibited-practice provisions came first; high-risk-system obligations follow. Record-keeping and governance habits are worth putting in place now rather than retrofitting later.
This page is for informational purposes only and is not legal advice. A Data Processing Addendum is available on request; email security@invisibles.app. Customers should review their specific obligations with their own privacy, legal, and compliance counsel.
Need a session with your AI governance team?
Book 30 minutes. We walk through deployer obligations, human-oversight patterns, and how the audit layer lines up with your internal AI policy.