Immunity lies at the heart of our mission and culture.

Our vision

Artificial intelligence systems are increasingly deployed in business and industrial applications, healthcare and finance, security and military contexts, disaster response, and policing and justice practices.

However, any disruption to these systems can negatively impact human values, health, assets, and rights.

This demands efficiency, resilience, and immunity: the capacity to respond to today's imperatives and to protect these systems from destructive influences of many kinds.

At the same time, the aim is to be systematically prepared for known and unknown complexities as they arise, equipped with the right tools, specific knowledge, and experience.

We recognise that understandings of immunity vary greatly. Still, the immunity of artificial intelligence systems, together with the analysis of relevant scientific publications, serves as a roadmap for establishing technical and cultural requirements for forthcoming artificial intelligence systems.

Artificial intelligence is probably the most important invention of the twenty-first century. It is transforming the world much as earlier technological revolutions did: the printing press, the steam engine, electricity, the internet.

The vast majority of current approaches aim to reduce vulnerability to certain types of disturbance, or to implement specific solutions, even as other threats emerge.

We have been debating security, and later defence, in one way or another for at least five hundred years, and each new wave of technology or creativity brings new kinds of arguments.

Even though those solutions deliver significant value to organisations, they may fall short of the foundational needs of artificial intelligence.

We need a new level playing field, and in service of that, thoughtful regulation that respects a fundamental truth: artificial intelligence offers a more effective way to check human values, and so it deserves immunity.

Our commitments


We will stumble and maybe fall, but always focus on constant improvement.

As a field, we are just beginning to understand how to align AI with human values. To ensure human values in AI are protected and enhanced, we believe in building dynamic processes and frameworks for assessing the stability of already-developed artificial intelligence systems: configuring their architecture and learning scenarios, pursuing constant improvement, and committing to safety, reliability, and trust.
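As a minimal sketch of what such a stability assessment could look like, the following perturbs a model's inputs and measures how consistent its outputs remain. Everything here is an assumption for illustration: the assess_stability helper, the model-as-callable interface, and the 0.95 threshold are hypothetical, not a description of our production framework.

```python
import random

def assess_stability(model, inputs, trials=100, noise=0.01, threshold=0.95):
    # Hypothetical check: does the model give the same answer when its
    # numeric inputs are slightly perturbed?
    consistent = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            # Add small Gaussian noise to each feature (assumed numeric inputs).
            perturbed = [v + random.gauss(0, noise) for v in x]
            if model(perturbed) == baseline:
                consistent += 1
    score = consistent / (len(inputs) * trials)
    return score, score >= threshold  # (stability score, passes this bar?)
```

A real framework would cover far more than input noise, but the shape is the same: perturb, observe, and compare against an explicit, agreed-upon bar.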

We are gaining vast knowledge about ensuring these systems are immune, even as the landscape of risks and opportunities changes constantly. We will not always get it right, and we are prepared to make mistakes.

Yet we commit to seeking feedback and responding rapidly. We prioritise humility and honesty because, in the end, this is how we improve the resilience and efficiency of AIs.


We’re transparent about the principles we stand for.

Immunity is a matter of evolution. Organisations choose what risks to prioritise and how to address them.

There is room for many perspectives in the design of business AIs, and many alternatives will exist for whatever society might need. To that end, we commit to sharing transparently what we expect from AIs.

We commit to being deliberately transparent about the values and beliefs built into AIs, so that stable systems can be conceived.


We put business interests first.

Immunity will always be part of your system and aligned with your business interests. Our goal is to help you clarify and articulate your immunity so that you can teach the system to work constantly towards serving the organisation in the best way possible. We never want to be incentivised to keep businesses engaged in any way other than through their values.

We commit to creating an immune AI that is true to its purpose and always prioritises business interests.


We optimise by design.

The immunity of your system plays a critical role in how the organisation behaves at any given time. What has been taught is not necessarily what will be taught, and is probably not what is being taught. To better understand what we want from AIs, we optimise our processes by design, based on the technical and cultural values of your community.

We commit to focusing on the community of dedicated users and providing them with the best optimisation for immunity we can.

Approach

The artificial intelligence era demands that people ask better questions and make better decisions, grounded in open governance, responsibility, and collaboration, so that they can break the barriers to scale.

First, we establish a reliable and safe policy that lays out the values you want to implement at the heart of your systems.

Second, we align the model through diverse technical methods to comply with the policy.

Finally, we deploy an ongoing process of review and improvement to obtain a model that conforms to the policy.
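As a minimal sketch of how these three steps might fit together (the policy dictionary, the evaluate and retrain callables, and the compliance threshold below are all assumptions for illustration, not our actual tooling):

```python
# Hypothetical policy: step one, the values laid out at the heart of the system.
policy = {
    "values": ["safety", "reliability", "trust"],
    "compliance_threshold": 0.99,  # assumed acceptance bar
}

def review_loop(model, policy, evaluate, retrain, max_rounds=10):
    # Steps two and three: align the model, then review and improve
    # until it conforms to the policy (or the rounds run out).
    for _ in range(max_rounds):
        score = evaluate(model, policy)  # measure compliance with the stated values
        if score >= policy["compliance_threshold"]:
            return model  # the model now conforms to the policy
        model = retrain(model, policy)  # realign through technical methods
    raise RuntimeError("model did not reach policy compliance")
```

A real pipeline would substitute the organisation's own alignment and audit tooling for evaluate and retrain; the point is only that review is a loop, not a one-off gate.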

By following this approach, our objective is to create the foundation of trust that will enable systems to deliver on the promises of trustworthy artificial intelligence.

This page provides an overview of our current thinking on immunity and the steps towards improving AI systems, but the framework is constantly evolving.

Wild Intelligence is in its earliest stages and far from perfect. As we work to improve our techniques and methodology continually, we’ll share updates publicly on our blog.