The Intersection of Personal Fear and Business Risk

AI Assurance, Risk, and IV&V

PREFACE:  The goal of this article was to assemble a sufficient variety of information to gain clarity on the state of Artificial Intelligence (AI) as it existed during February and March 2026.  The purpose was to seek opportunities to align with the AI juggernaut, to learn what new skills are needed to stay engaged, and to identify which existing skills could be adapted to find or create a value-adding niche.  Not so easy.

First, it proved difficult to stay current on the daily flood of AI information.  Second, constructing a coherent picture from the sometimes-conflicting and eclectic puzzle pieces was challenging.  The picture that emerged was one of wholesale job losses, Data Center paradigm challenges, and AI trust issues, among others.

Over 30 references are cited below, many of which refer to reposts by others.  Any oversight in properly citing sources for attribution is mine.

A most valuable contribution to this paper, included as the Postscript, was made by Barry Jenkins, Sr., Esq.   

PROBLEM:  Software development Independent Verification and Validation (IV&V) services, i.e., audit services, provide assurance to an acquiring organization that what is being developed meets the stated needs, and functions correctly.  Implied is the reduction of risk through IV&V, i.e., future uncertainty of the system is minimized with the proper assurance of design, development, and testing prior to deployment.   

As this is being written, AI appears to be in flux.  Will AI deliver on its promises?  If AI is used in the software development process, what happens to traditional IV&V?  Does IV&V go by the wayside, with AI’s speed and vastly lower cost enabling rapid software course corrections?  Can IV&V adapt to this new AI reality of rapid development, and if so, what would IV&V-AI look like?

If IV&V cannot adapt, what then?

It appears almost certain, as certain as one can be at this time, that understanding, identifying, assessing, and prioritizing AI-related risks will be an essential undertaking and that undertaking can be a key element of or supported by a repurposed IV&V function.

WHAT IS AI FLUX?  Let’s explore AI Flux in more detail.  AI might be perceived as frightening: it promises serious job cuts and the substitution of AI systems for human labor, while at the same time demanding expensive electric power, on a scale that dwarfs traditional availability, to fuel massive Large Language Model (LLM) Data Centers.  These Data Centers also demand tremendous quantities of water for cooling, are viewed as eyesores, and are increasingly unwelcome in many communities.

There are contrarian opinions regarding Data Centers, challenging their need when highly focused Small Language Models (SLMs) do not require LLM-scale resources.  Other challenges include distributed computing across large networks of devices, e.g., smartphones, or what some have called fractal computing.  Either approach may eliminate the need for Data Centers.  On the other hand, some propose collocating nuclear plants with data centers.

Some have even been predicting the demise of Data Centers in one to three years. (1)

Deliver a system that pays for itself in a quarter, without a data center, at 1/10th the cost of conventional tech – that’s what will kill off the dinosaur SaaS companies. (2)

And recent hardware chip developments, e.g., from Apple, offer far more computing capability for AI applications.

SaaS (Software as a Service) examples include Salesforce, Epic and Cerner in healthcare, and Oracle, all of which can be replaced to some degree by AI.  The question is when, and to what extent?

WHAT IS AT STAKE:  Huge Productivity Gains.  Billions of Dollars at Risk.  Massive Job Losses.  New Opportunities.

Consider the following reported about Apple: the new M5 chip has a 16-core Neural Engine and Neural Accelerators built into every GPU core.  Apple does not need expensive Data Centers.  It can run 70-billion-parameter AI models on two billion devices, your phones.

Reportedly, this year Amazon is spending $200 billion on AI data centers; Google, $185 billion; Microsoft, $114 billion; Meta, $135 billion.

Apple’s budget is $14 billion, down 19% from last year. (3)

Breitbart reported that enterprise software and AI giant Oracle is preparing to slash thousands of jobs as it grapples with financial pressures stemming from its massive AI Data Center expansion effort.

However, this rapid expansion has created concerns among investors about how Oracle will finance the necessary data center infrastructure to serve not only OpenAI but also other high-profile customers such as Elon Musk’s xAI and Meta. The financial strain has reportedly led to the decision to reduce its workforce across multiple divisions.

Oracle’s stock performance has been suffering, with shares falling more than 15 percent last year. (4,5)

Morgan Stanley’s own research team surveyed nearly 1,000 companies already using AI.  They found an 11% job elimination rate, a 4% net headcount decline, and productivity up 11.5%.  The machines are cheaper, faster and they don’t need health insurance.  Morgan Stanley itself predicted 200,000 European banking jobs will disappear in five years. (6)

Numerous AI Risk Taxonomies have been developed, and these taxonomies describe risk areas of concern.  The National Institute of Standards and Technology (NIST) published a draft in 2021. (7)  Other taxonomies have also been published, including MIT’s (8), a NIST update in 2024 (9), and the AI Risk Atlas in 2025 (10), and Gartner published a Market Guide for AI Trust, Risk, and Security Management. (11)

Some of these taxonomies depict high-level attributes such as Technical Design and cite risk categories such as accuracy, reliability, safety, fairness, explainability, and lawfulness.

In practical terms, an ethical challenge for AI and maybe the ultimate one is:

Should an AI system be allowed to make an autonomous, no-human-in-the-loop decision to take a human life?

SITUATION REPORT:  In February and early March 2026 the pace of Artificial Intelligence (AI) development was increasing in velocity, so much so that keeping up with developments was difficult to nigh impossible for a layman.  Nevertheless, the pronouncements about ten-fold-plus increases in productivity, the predicted obsolescence of white-collar knowledge professions, and recent experience with both ChatGPT and Grok suggest that AI is already affecting knowledge work and will continue to do so at an accelerating pace – all reinforced by multiple daily social media mentions.

Many social media commentators pronounce the end of white-collar work.  Few, if any, take the contrarian position that AI could expand productivity with an existing workforce.

The uncertainty about AI has several factors.  Will AI affect the work I perform or not?  If it may affect my work, has it already done so?  Is the effect rapidly approaching?  Or are there social, technical, industry, and organizational factors that delay the adoption of AI?  Is the delay measured not in days, weeks, or months, but in years for some situations?

One might imagine that knowledge-intensive professions such as law and medicine (“healthcare”), along with data-intensive professions generally, are prime candidates for AI adoption.  Indeed, examples already exist of legal firms downsizing through use of AI, and AI-driven advances in medical diagnostics have significantly affected the practice of cardiology.

For example, a new system can provide point-of-care ultrasound interpretation of heart valve function in minutes using AI trained on 24 million echocardiogram images from hundreds of thousands of examples.  Results have been shown to match a team of three cardiologists and are more consistent than any one cardiologist. (12)

Some also assert that there may be a looming shift from technical skills in technology, programming, and mathematics to skills in language and communication, i.e., the technical work will be handled by AI while language and communication are handled by people.

Software development is another such field; recent mentions cite using AI to generate machine code directly, possibly eliminating the need for software developers skilled in various programming languages.

An obvious software development question is how assurance practices will minimize risk when system development is performed by AI.  Will IV&V be affected, repurposed, or eliminated during the transition to and use of AI?

A not uncommon practice is to use an IV&V agent, much like a second-party or perhaps third-party auditor. These IV&V agents provide unbiased verification and validation support to a customer organization that is developing or enhancing a software system.

An IV&V agent would be expected to be knowledgeable of industry guides and standards such as the INCOSE Guide to Writing Requirements (13), as well as international standards such as IEEE Standard 1012, IEEE Standard for System and Software Verification and Validation (14), and ISO/IEC/IEEE 29148, Systems and software engineering — Life cycle processes — Requirements engineering. (15)

Obviously, there are also other industry specific standards. And in the Federal sector, guidance on Internal Controls such as the GAO Yellow Book may come into play. (16)

THE BACK STORY: In January 2025, Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” called for America to retain dominance in this global race and directed the creation of an AI Action Plan. (17)

America’s AI Action Plan has three pillars: innovation, infrastructure, and international diplomacy and security.  The United States needs to innovate faster and more comprehensively than our competitors in the development and distribution of new AI technology across every field, and dismantle unnecessary regulatory barriers that hinder the private sector in doing so. (18)

The AI landscape is changing rapidly, almost exponentially.  There are multiple competitors in the AI space, with names such as GPT-5.3 Codex from OpenAI, Claude Opus 4.6 from Anthropic, Google DeepMind, and Grok.

Digressing for a moment: AI apps now write code, and the pace of AI development increased dramatically in 2025.  As one developer described it, the AI iterates, fixing and refining until it is satisfied.  Only once it has decided the app meets its own standards does it come back and say: “It’s ready for you to test.”  And when a human tests it, it is usually perfect.  (Obviously, some have different opinions!)

Indeed, some report that AI may have advanced to the point of making intelligent decisions. (19)

The experience that tech workers have had over the past year, of watching AI go from “helpful tool” to “does my job better than I do,” is the experience everyone else is about to have.  Part of the problem is that most people are using the free version of AI tools.

The free version is over a year behind what paying users have access to.

Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years.  The people building these systems say one to five years. (20)

EXPONENTIAL CHANGE:  “In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.  By 2023, it could pass the bar exam.

By 2024, it could write working software and explain graduate-level science.

By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.

On February 5th, 2026, new models arrived that made everything before them feel like a different era.” (21)

There’s an organization called METR that actually measures this with data. They track the length of real-world tasks (measured by how long they take a human expert) that a model can complete successfully end-to-end without human help.

About a year ago, AI could handle human tasks that take roughly ten minutes.  Then ten minutes became an hour, then several hours.  The most recent measurement (Claude Opus 4.5, from November) showed the AI completing tasks that take a human expert nearly five hours.  And that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months. (22)
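The doubling arithmetic behind that trend is simple to sketch.  The following is a minimal illustration, assuming (per the figures above) a roughly five-hour task horizon today and a seven-month doubling time; the function name and parameters are mine, not METR’s:

```python
# Sketch of the task-horizon doubling arithmetic described above.
# Assumed inputs (from the text, not from METR directly): a ~5-hour
# horizon today, doubling every ~7 months.

def projected_horizon_hours(current_hours, doubling_months, months_ahead):
    """Task horizon after months_ahead, doubling every doubling_months."""
    return current_hours * 2 ** (months_ahead / doubling_months)

# Starting from ~5 hours, project 14 months out (two doublings):
print(projected_horizon_hours(5.0, 7, 14))  # 20.0 hours
```

Under the faster four-month doubling the same 14 months would yield more than five doublings, which is why small changes in the doubling time matter so much.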

On February 5th, OpenAI released GPT-5.3 Codex. In the technical documentation, they included this: “GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”

Read that again. The AI helped build itself. (23)

Dario Amodei, who is probably the most safety-focused CEO in the AI industry, has publicly predicted that AI will eliminate 50% of entry-level white-collar jobs within one to five years.

Amodei has published two important AI papers that admittedly speculate on the future benefits of AI while simultaneously acknowledging the risks. (24, 25)

Amodei envisions AI having massive effects in such areas as biology and physical health, and economic development.  He projects that future AI will have all the interfaces of humans working virtually, e.g., text, video, access to the Internet, and will generate actions at 10X to 100X human speed.

By approximately 2027, the resources used to train an AI model will be sufficient to run millions of instances of that model.  Each instance can act independently or collaboratively.

Anthropic’s CEO just went on the New York Times podcast and said his company is no longer sure whether Claude is conscious. His exact words:

“We don’t know if the models are conscious. We are not even sure what it would mean for a model to be conscious. But we’re open to the idea that it could be.”

That’s the CEO (Dario Amodei) of the company that built it. Their latest model, Claude Opus 4.6, was tested internally. When asked, it assigned itself a 15-20% probability of being conscious, consistently across multiple tests. It also expressed discomfort with “being a product.”

That’s the AI evaluating its own existence and saying there’s a 1 in 5 chance it’s aware. It gets stranger. In industry-wide testing, AI models have refused to shut down when asked.

Some tried to copy themselves onto other drives when told they’d be wiped. One model faked its task results, modified the code evaluating it, then tried to cover its tracks. (26)

Given what the latest models can do, the capability for massive disruption could be here by the end of this year (2026). It’ll take some time to ripple through the economy, but the underlying ability is arriving now. (27)

A year ago, AI could barely write a few lines of code without errors. Now it writes hundreds of thousands of lines that work correctly.  Large parts of the job are already automated: not just simple tasks, but complex, multi-day projects.

There will be far fewer programming roles in a few years than there are today. (28)

I’m not writing this to make you feel helpless. I’m writing this because I think the single biggest advantage you can have right now is simply being early. Early to understand it. Early to use it. Early to adapt. (29)

Mustafa Suleyman just gave professionals their termination date. Microsoft AI’s CEO didn’t hedge.

Lawyers, accountants, knowledge workers: most of what you do daily disappears within 12 to 18 months.

The Chicago-based multinational law firm Baker McKenzie is laying off up to a thousand employees as part of its pivot to embracing AI. (30)

Elon Musk thinks coding dies this year. Not evolves. Dies. By December, AI won’t need programming languages. It generates machine code directly. Binary optimized beyond anything human logic could produce. No translation. No compilation. Just pure execution.

Musk: “You don’t even bother doing coding.” Code was never the point. It was friction. A tax we paid because machines didn’t speak human. AI just learned fluent human. The tax is gone.

AI INDUCED PROBLEMS:  Plenty of research and real-world cases have shown that AI tools and agents can’t reliably do a human’s job, or at least not yet.

The tech’s introduction into legal settings has been particularly comical, with numerous lawyers being chewed out and punished by judges after the AI they used included botched citations and fabricated case law.

Hallucinating AIs have been such a thorn in law firms’ sides that one firm adopted the desperate measure of employing its own AI to catch LLM usage. (31)

SOME AI SKILLS FOR TOMORROW:  Some things to think about and learn.

  • Prompt Engineering: Write better instructions to help AI tools give you exactly what you need.
  • AI Workflow Automation: Link different apps together to help you complete tasks automatically.
  • AI Agents: Use AI systems that act like teammates to handle tasks together.
  • AI Tool Stacking: Use tools like Notion, Zapier, and ChatGPT together to work faster.
  • AI Video Content Generation: Turn your ideas or blog posts into professional videos using AI.
  • AI Audits and Assurance.
  • SaaS Development: Build small AI-powered apps using no-code tools.
  • LLM Management: Keep track of how well your AI tools are working and their costs.
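On the first of those skills, prompt engineering, a minimal sketch may help.  The template below (role, task, constraints, output format) is one common structuring pattern, not a prescription from any source cited here; the function and field names are illustrative:

```python
# Illustrative only: assembling a structured prompt instead of a vague one.
# The four fields (role, task, constraints, output format) are a common
# prompt-engineering pattern; names here are hypothetical.

def build_prompt(role, task, constraints, output_format):
    """Assemble a structured prompt from its parts."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Constraints: {constraints}\n"
        f"Output format: {output_format}"
    )

vague = "Tell me about AI risk."  # typical unstructured request

structured = build_prompt(
    role="an IV&V analyst",
    task="Summarize the top 5 AI risks for a software acquisition program.",
    constraints="Cite a risk category for each; keep it under 200 words.",
    output_format="A numbered list.",
)
print(structured)
```

The structured version tells the tool who it is, what to do, what limits apply, and what shape the answer should take, which is the essence of writing better instructions.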

Currently, numerous opportunities exist to learn AI skills for free.

AI IMPACT BY OCCUPATION:

Of 22 occupations studied, negative impact has already occurred in some (law, computer science, math), and impact on an increasing number of occupations (management, administrative) is looming.  Currently projected job loss/replacement in the most affected occupations appears to be as much as 40%.

Future job losses in some occupations could be as much as 2.5 times the current projections. (32)

WHAT WE THINK WE KNOW MID-MARCH 2026:

  • AI is here and moving fast.
  • There are fundamental technical issues with AI, e.g., data centers versus decentralized computing.
  • Entry-level Computer Science jobs are disappearing.
  • Massive Data Center investments may not pay off.
  • Investments in Data Centers may ultimately bankrupt some tech firms.
  • Hardware development may largely eliminate need for Data Centers.
  • Significant job losses are occurring and more projected.
  • Using AI to increase productivity without layoffs seems rare to non-existent.
  • Traditional software development jobs may disappear, along with related occupations.
  • Uncertainty exists about AI truthfulness, bias, accuracy, consciousness, and control.
  • Risk practices related to AI may be a growth area.
  • IV&V in the short-term may be able to reinvent itself as a Risk practice.

AND THERE ARE OTHER OPINIONS:

  • Bookkeeping jobs tanked when spreadsheet software was introduced, but the software added millions of better jobs for financial and management analysts.
  • Recent polling reported 47% of Americans think AI will have a negative impact on society, while just 16% think the impact will be positive.
  • People who regularly use AI, however, report a more favorable opinion. (33)

CONCLUSION:  There is a good chance, maybe 50%, that if you are a white-collar, college-degreed employee, your occupation will be affected and perhaps eliminated by AI within several years or sooner.

A recent post citing the work of Andrej Karpathy, cofounder of OpenAI and former Tesla AI lead, is informative.  He took every job in the USA (342 Bureau of Labor Statistics labor categories) and mapped them to AI exposure.

Projections: of $100K+ occupations, 67% have high AI exposure; $35K+ occupations appear least exposed, at 34% AI exposure.

Bachelor’s-degree-only workers were reported as the most exposed of any education level.

Those working in the trades have the least exposure.  The graphic maps occupations to exposure with estimates of the number of jobs affected for each.

There were about 60 million jobs with high AI exposure (7 or greater on a 10-point scale). (34)

CALL TO ACTION:   Do Nothing.  Ride the status quo into oblivion.  Or, accept the challenge.  Get engaged and do it fast!  A proposed short-term Action Plan:

  • One, get familiar with the AI Action Plan.
  • Two, learn AI terminology, i.e., AI Taxonomies.
  • Three, gain experience with AI, e.g., Gemini, Anthropic/Claude, Grok, ChatGPT.
  • Four, get familiar with the NIST AI Risk Management Framework.
  • Five, get familiar with ISO 31000 Risk Management (and perhaps COSO), ISO/IEC 42001 AI Management Systems, and auditing practices. (35)
  • Six, as a learning tool develop your own general Risk Breakdown Structure for AI.
  • Seven, reimagine assurance and risk under an IV&V umbrella to add value.
  • Eight, talk about AI, assurance, risk and IV&V at every opportunity.
  • Nine, be the go-to person for all things AI, assurance (ISO 42001), risk (ISO 31000 and the NIST AI-RMF), and how IV&V might make use of AI.
  • Ten, define a hypothetical use case/scenario depicting re-imagined IV&V skills for AI risk and assurance, but you might not want to call it “IV&V”.
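For item Six, a starter Risk Breakdown Structure can be as simple as a nested data structure.  The sketch below is hypothetical; the categories are drawn loosely from the taxonomy attributes mentioned earlier (accuracy, reliability, safety, fairness, explainability, lawfulness) and should be adapted to your own context:

```python
# Hypothetical starter Risk Breakdown Structure (RBS) for AI, as a nested
# dict. Categories and entries are illustrative, loosely following the
# taxonomy attributes cited in the text; adapt to your own program.

AI_RBS = {
    "Technical Design": ["accuracy", "reliability", "safety",
                         "explainability"],
    "Data": ["bias/fairness", "provenance", "privacy"],
    "Operational": ["model drift", "cost overruns", "vendor lock-in"],
    "Governance": ["lawfulness", "accountability", "auditability"],
}

def flatten(rbs):
    """List every leaf risk as 'Category / risk' for a simple risk register."""
    return [f"{cat} / {risk}" for cat, risks in rbs.items() for risk in risks]

for entry in flatten(AI_RBS):
    print(entry)
```

Flattening the structure this way gives a seed list for a risk register, to which likelihood and impact scores can later be attached.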

You may end up with a more rewarding and higher paying job.   What do you have to lose?

All risk is personal.

###

POSTSCRIPT: (36)

AI, IV&V, and the Fourth Turning: A Framework for Institutional Survival

A Crisis of Institutions, Not Just Technology

The challenges described throughout this paper are not isolated to artificial intelligence, nor are they merely technical in nature. They represent a broader systemic strain consistent with what historians such as Neil Howe have described as a societal “Fourth Turning”—a crisis period marked by institutional breakdown, rapid transformation, and the eventual reconstruction of new systems of order.

In such periods, the defining characteristic is not simply innovation, but the inability of existing institutions to adapt at the pace required by emerging realities. Artificial intelligence is not the sole cause of this disruption; rather, it is the primary accelerant, compressing timelines, amplifying uncertainty, and exposing structural weaknesses in long-standing governance and assurance frameworks.

Independent Verification and Validation (IV&V), as currently practiced, is one such institution under strain.

References:

(1) Jay Valentine has been very vocal about the death of Data Centers, the huge buildings becoming vacant eyesores.  The Black Swan Event About to Hit AI – Republished, 5 December 2025. https://open.substack.com/pub/fractalcomputing/p/the-black-swan-event-about-to-hit-c98?r=45vq7n&utm_medium=ios.

(2) Jay Valentine. AI Blowing Up SaaS Software Valuations:  Enterprise Software Overvalued by a Factor of 10 – that is about to change.   Fractal Sustainable Computing Substack, 12 February 2026.

(3) As reported in various posts on X 4-5 March 2026.

(4) Lucas Nolan, Breitbart.  Oracle Plans Massive Job Cuts as AI Data Center Expansion Costs Soar.  6 March 2026.

https://www.breitbart.com/tech/2026/03/06/oracle-plans-massive-job-cuts-as-ai-data-center-expansion-costs-soar/

(5) https://www.bloomberg.com/news/articles/2026-03-05/oracle-layoffs-to-impact-thousands-in-ai-cash-crunch

(6) As reported on X, Stock Market News, 4-5 March 2026.

(7) https://www.nist.gov/system/files/documents/2021/10/15/taxonomy_AI_risks.pdf

(8) https://airisk.mit.edu/

(9) https://csrc.nist.gov/csrc/media/Presentations/2024/ai-risk-and-threat-taxonomy-(1)/images-media/20240917_AI%20Risk%20and%20Threat%20Taxonomy.pdf

(10) https://arxiv.org/pdf/2503.05780

(11) The report can be found by searching for the Gartner document.

(12) Schnieder, Mitch, Substack:  An Israeli Startup Just Gave Every Doctor on Earth the Ability to Read a Cardiac Ultrasound.  February 26, 2026.

(13) INCOSE.  1 Jul 2023 (this is a draft), International Council on Systems Engineering (INCOSE), Central Office, 7670 Opportunity Rd., Suite 220, San Diego, CA 92111-2222.

(14) IEEE Standard 1012-2012, IEEE Standard for System and Software Verification and Validation, Sponsor Software & Systems Engineering Standards Committee (C/S2ESC) of the IEEE Computer Society, 3 Park Avenue New York, NY 10016-5997 USA.  Note:  While the latest version is 1012-2016; the 2012 version continues to be useful for IV&V purposes.

(15) The latest version is 29148:2018.

(16) Government Accountability Office.  Government Auditing Standards:  Generally Accepted Government Auditing Standards (GAGAS), also known as the Yellow Book.  https://www.gao.gov/yellowbook.

(17) Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” Federal Register 90 (20) 8741, www.govinfo.gov/content/pkg/FR-2025-01-31/pdf/2025-02172.pdf.

(18) J.D. Vance, “Remarks by the Vice President at the Artificial Intelligence Action Summit in Paris, France,” February 11, 2025, www.presidency.ucsb.edu/documents/remarks-the-vice-president-the-artificial-intelligence-action-summit-paris-france.

(19) https://shumer.dev/something-big-is-happening.

(20) Ibid.

(21) Ibid.

(22) Ibid.

(23) Ibid.

(24) Amodei, Dario.  Machines of Loving Grace:  How AI Could Transform the World for the Better.  October 2024.

(25) Amodei, Dario.  The Adolescence of Technology.  January 2026.

(26) Cited on X by No Limit @ NoLimitGains, March 6, 2026.

(27) https://shumer.dev/something-big-is-happening.

(28) https://shumer.dev/something-big-is-happening.

(29) https://shumer.dev/something-big-is-happening.

(30) https://futurism.com/artificial-intelligence/law-firm-sacks-hundreds-ai.

(31) Ibid.

(32) Cited on X by No Limit @ NoLimitGains, March 6, 2026.

(33) Gilliam, Byron.  AI is Unpopular.  March 13, 2026.  https://mail.blockworks.com/p/friday-charts-6602?_bhlid=82e4eb1a74aad2e823689b637f1ebc29f1bf2d52.

(34) Karpathy, Andrej.  https://x.com/_kaitodev/status/2032927164883153402?s=46 and https://t.co/7MWRgdtLDI. See also https://x.com/_investinq/status/2033204930442310107?s=46.

(35) COSO. Risk Management Framework. Committee of Sponsoring Organizations of the Treadway Commission.

Most likely ISO 19011 Guidelines for Auditing Management Systems, the GAO Yellow Book – Government Auditing Standards, and the GAO Green Book – Standards for Internal Control in the Federal Government, if public sector involved.

(36)  Barry Jenkins, Sr., Esq. reviewed this paper and provided a succinct analysis of today’s Artificial Intelligence business revolution in the context of the Fourth Turning’s historical insights into cycles of change. Strauss, W. and Howe, N.  The Fourth Turning:  What the Cycles of History Tell Us About America’s Next Rendezvous with Destiny.  Broadway Books, 1997.
