Originally published on https://www.cio.com/article/4028051/designingfor-humans-Why-most-enterprise-Adoptions-of-ai-fail.html
Building technology has always been a messy business. We are constantly bombarded with stories of project failures, wasted money and even the disappearance of entire industries. It is safe to say that we have some work to do as an industry. Adding AI to this mix is like pouring gasoline on a smoldering fire: there is a real danger that we burn our companies to the ground.
In essence, people build technology for people. Unfortunately, we allow technological fads and fashions to lead us astray. I have shipped AI products for more than ten years, at Workhuman and earlier in financial services. In this piece I will take you through hard-earned lessons from that journey. I have distilled five principles to help decision makers; most of them are about people, their fears and how they work.
5 principles to help decision makers
The path to excellence lies in the following maturity path: trust → federated innovation → concrete tasks → implementation metrics → build for change.
1. Trust over performance
Companies have many different ways to measure success when implementing new solutions. Performance, cost and security are all factors that get measured. We rarely measure trust. That is unfortunate: a user's trust in the system is a key factor in the success of AI programs. An excellent black-box solution is dead on arrival if nobody believes its results.
I once ran an AI prediction system for US consumer finance at a leading bank. Our storage costs were enormous, not helped by our credit card model, which spewed out 5 TB of data every day. To alleviate this, we found an alternative solution that pre-processed the results using a black-box model. This solution used 95% less storage (with a cost reduction to match). When I presented the idea to senior stakeholders in the business, they killed it immediately. Regulators would not trust a system whose output they could not fully explain. If they could not see how each calculation was performed every step of the way, they could not trust the result.
A recommendation here is to draw up a clear ethics policy. There must be an open and transparent mechanism for staff and users to submit feedback on AI outputs. Without this, users may feel that they cannot understand how results are generated. If they have no voice in correcting 'wrong' outputs, a transformation is unlikely to win hearts and minds across the organization.
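Such a feedback mechanism can start very small: a record of who flagged which AI output, and why. The sketch below is my own illustration, not Workhuman's actual system; all names and fields are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIFeedback:
    """One piece of user feedback on a single AI output."""
    output_id: str   # identifier of the AI result being reviewed
    user_id: str
    verdict: str     # e.g. "correct", "wrong", "unclear"
    comment: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class FeedbackLog:
    """In-memory store; a real system would persist entries and route them to reviewers."""
    def __init__(self):
        self._entries: list[AIFeedback] = []

    def submit(self, fb: AIFeedback) -> None:
        self._entries.append(fb)

    def flagged_outputs(self) -> set[str]:
        """Outputs that at least one user marked as wrong."""
        return {fb.output_id for fb in self._entries if fb.verdict == "wrong"}

log = FeedbackLog()
log.submit(AIFeedback("pred-123", "alice", "wrong", "score ignores tenure"))
log.submit(AIFeedback("pred-456", "bob", "correct"))
print(log.flagged_outputs())  # {'pred-123'}
```

The point is less the code than the contract: every output is addressable, every user has a channel, and flagged outputs are visible for review.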
2. Federated innovation over central control
AI has the potential to deliver innovation at previously unimaginable speed. It lowers the cost of experimentation and acts as an idea generator, a sounding board for new approaches. This allows people to generate multiple solutions within minutes. A great way to stall all that innovation is to route it through a central body, committee or approval mechanism. Bureaucracy is where ideas die.
Nobel Prize-winning economist F.A. Hayek once said, "There are orderly structures which are the product of the action of many men but are not the result of human design." He argued against central planning, where one person is responsible for outcomes. Instead, he favored 'spontaneous order', where systems emerge from individual actions without central control. This, he argued, is where innovations such as language, law and economic markets come from.
The path between control and anarchy is difficult to navigate. Companies must find a way to 'hold the bird of innovation in their hand'. Hold too tight and you kill the bird; hold too loose and the bird flies away. Unfortunately, many companies hold too tight. They do this by relying heavily on a command-and-control structure, particularly groups such as legal, security and procurement. I have seen a single risk-averse pronouncement from these groups crush innovation. For creative individuals who innovate at the edges, even the prospect of having to present their idea to a committee can have a chilling effect. It is easier to do nothing and stay clear of the dead hand of bureaucracy. This kills the bird, and with it the delicate spirit of innovation.
AI can supercharge the innovative capacity of every individual. For this reason, we must federate innovation across the company. We should encourage the most senior executives to state in plain language what the appetite for AI risk is and to explain where the guardrails lie. Then let teams experiment, unimpeded by bureaucracy. Central functions shift from gatekeepers to stewards, enforcing only the non-negotiables. This lets us plant seeds throughout the organization and harvest the best returns for everyone.
3. Concrete tasks over abstract work
Early AI pioneer Herbert Simon was the father of the behavioral sciences and a Nobel and Turing Prize winner. He also coined the idea of bounded rationality: people settle for "good enough" when the number of options grows beyond a certain point. Generative AI follows this approach (possibly because, having been trained on human data, it mimics human behavior). Generative AI is stochastic: give it the same input twice and you may get different outputs, each a "good enough" answer. This is very different from the classical model we are used to, where the same input yields the same output every time.
This stochastic model, where the output is unpredictable, makes top-down modeling of use cases even harder. In my experience, projects only clicked when we sat with users and really understood how they worked. Early in our development of the Workhuman AI assistant, generic, high-level requirements gave us very strange, unpredictable behavior. We had to rewrite the use cases as more detailed, low-level requirements, with a thorough understanding of the desired behavior and tolerances. We also logged every interaction and used that data to refine model behavior. In this world, generic high-level solution design is guesswork.
Leaders at all levels must get closer to the details of how the work is done. Top-down general statements are off the table. Instead, teams must define ultra-specific use cases and design confidence thresholds (e.g., "90% of AI-produced code must pass unit tests on the first run"). In the world of generative AI, clarity beats abstraction every time.
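A threshold like the "90% of AI-produced code must pass unit tests on the first run" example can be checked mechanically. This is a minimal sketch; the function name, threshold and sample data are my own illustration:

```python
def passes_quality_gate(results: list[bool], threshold: float = 0.9) -> bool:
    """True when the share of passing first-run unit-test results meets the threshold."""
    if not results:
        return False  # no evidence, no pass
    return sum(results) / len(results) >= threshold

# 18 of 20 first-run test executions passed: 90%, so the gate holds.
runs = [True] * 18 + [False] * 2
print(passes_quality_gate(runs))                       # True
print(passes_quality_gate([True] * 8 + [False] * 2))   # False (80%)
```

Precision of this kind is the point: "the code should mostly work" cannot be tested, but "90% first-run pass rate" can.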
4. Implementation over adoption
Buying a tool is simple; changing behavior is brutal. A top-down edict can get people to take the first step. But measuring adoption is the wrong way to drive change: it produces box-ticking "adoption" but shallow, half-implemented use.
Managers are just as much victims of fads and fashions as any online shopping addict (just swap management methods, shiny new technologies and FOMO for the latest styles from Paris). And it takes no artificial general intelligence to notice that AI is the hot, hot, hot trend. Managers must tell an AI story and show benefits because they are under pressure from shareholders, investors and the market in general. Through my network at IASA, I saw this commonly result in edicts to measure "AI adoption". Unfortunately, that has had very mixed results so far.
Human nature resists change. A good manager has countless competing concerns, including running a team, taking on business challenges, and hiring and retaining talent. When a new program to adopt an AI strategy comes down from above, the manager, trying to protect their team, meet the needs of the business and keep their head above water, often compromises by taking on the tooling but not implementing it thoroughly.
At Workhuman we have found that measuring adoption (and not just for AI) is not the right way to start a transformation. It measures the start of the race but completely ignores the finish. Instead of vanity metrics, we measure outcome metrics (for example, work processes changed, manual steps retired and business impact). By measuring implementation and impact, we avoid the box-ticking trap into which so many companies fall.
From our decade-plus experience in AI we also learned that AI transformation is part of a larger support system, including education, tooling and a supportive internal community. We partner with an Irish university to run internal diploma programs in AI and offer AI tooling to all employees, regardless of their role. We have also fostered internal communities at all levels to drive the concept forward. This helped us deliver AI solutions both internally and externally, as evidenced by the release of our AI assistant, a transformational AI solution for the HR community.
5. Change over choice
The AI landscape shifts monthly, with a continuous stream of new models and vendors locked in a constant race. A choice that locks you into a single technology stack could, in the near future, make your company look like a horse and buggy clip-clopping through the center of a modern city.
When we started evaluating models for our new AI assistant, we faced several challenges. First, what can each model actually do? There were few useful benchmarks, and those that existed offered little insight into business capabilities. We also struggled to weigh one model's strengths against another's weaknesses, and vice versa.
In the end we agreed on one core architecture principle: everything we design must be swappable. In particular, we must be able to change the core foundation models that underlie the solution. This has allowed us to adapt constantly over the past year. We test every new model on release and work out how each can best be used to offer our customers a great experience.
Because models change so quickly, leaders must make the ability to swap AI models a core principle. Companies should call the model behind a thin abstraction layer, while building out prompts and evaluation harnesses so that new models can be dropped in overnight. The ability to swap horses mid-race may be the competitive edge needed to win in today's market.
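A thin abstraction layer of this kind can be sketched in a few lines of Python. All class and function names here are hypothetical illustrations, not Workhuman's actual architecture: application code depends only on an interface, and an evaluation step gates each swap.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Thin abstraction: anything that turns a prompt into text."""
    def complete(self, prompt: str) -> str: ...

class ModelA:
    def complete(self, prompt: str) -> str:
        return f"[model-a] {prompt}"

class ModelB:
    def complete(self, prompt: str) -> str:
        return f"[model-b] {prompt}"

class Assistant:
    """Application code depends only on the ChatModel interface, never a vendor SDK."""
    def __init__(self, model: ChatModel):
        self.model = model

    def answer(self, question: str) -> str:
        return self.model.complete(question)

def evaluate(model: ChatModel, cases: list[str]) -> bool:
    """Minimal evaluation harness: every new model must clear it before swap-in."""
    return all(model.complete(c) for c in cases)

assistant = Assistant(ModelA())
if evaluate(ModelB(), ["smoke test"]):
    assistant.model = ModelB()  # swap the foundation model overnight
print(assistant.answer("hello"))  # [model-b] hello
```

In a real system the evaluation harness would run the prompt library and quality thresholds against the candidate model, but the shape is the same: the rest of the stack never knows which model is behind the interface.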
AI for leaders
Technology choices are leadership choices. Who decides what to automate? Which ethical red lines are immovable? How do we protect every person who works with us? The adoption of AI is a leadership challenge that cannot be delegated to consultants or individual contributors. How we implement AI now will define the future successes and failures of the business world. It is a challenge that must be driven by thoughtful leadership. Every leader must deeply understand the AI landscape and work out how best to enable their teams to build the companies of tomorrow.