Introduction
The Federal Aviation Administration (FAA) teaches pilots about the Five Hazardous Attitudes: psychological tendencies that lead to unsafe decisions if left unchecked. It is perhaps not surprising that these five attitudes apply to how organizations use AI as well, given that aviation and AI share the common themes of engineering and complex systems.
What are the five hazardous attitudes identified by the FAA?
- Anti-authority: “Don’t tell me”
- Impulsivity: “Do it quickly”
- Invulnerability: “It won’t happen to me”
- Macho: “I can do it”
- Resignation: “What’s the use?”
Let’s look at how these five attitudes can manifest in organizations’ use of AI.
Anti-authority
How it manifests: Organizations either defer to brand/positional authority (Big 4 consultants, vendor sales teams, senior executives) on technical AI decisions, or let technical teams make business/strategic decisions without domain context. The mismatch creates blind spots: technical feasibility doesn’t equal business value, and business authority doesn’t guarantee technical understanding.
Additional risks:
- Technical teams building impressive AI solutions that don’t solve actual business problems
- Business leaders making technical commitments they can’t deliver
- Internal AI experts dismissed because they lack seniority, while senior leaders make uninformed technical choices
- Over-engineered solutions that look “consultant-friendly” but miss the mark
Antidote for organizations: Create decision-making structures that match expertise to decisions. Technical architecture decisions need technical authority. Business prioritization needs business authority. Strategic AI direction needs both, working in partnership. Build internal AI literacy so leaders can ask better questions without trying to make technical decisions.
Impulsivity
How it manifests: Reacting to competitor announcements or new model releases with immediate action demands: “We need a ChatGPT integration this quarter!” Organizations jump between AI trends without strategic alignment, driven by fear of being left behind rather than clear business rationale.
Additional risks:
- Shadow AI projects with poor governance sprouting across departments
- Wasted budgets on pilots that never scale because they lacked proper planning
- AI initiative fatigue among staff after repeated false starts
- Security vulnerabilities from rushed implementations
Antidote for organizations: Less haste, more speed. Start with problem discovery, not solution hunting. Prioritize use cases with clear ROI potential and realistic timelines. Establish governance frameworks before launching pilots. Resist the urge to “try everything”. Instead, build repeatable success patterns with well-scoped projects.
Invulnerability
How it manifests: Assuming risks are someone else’s problem: “Our vendor handles security,” “Bias won’t affect our use case,” or “Hallucinations aren’t a concern for our application.” Organizations skip audits, monitoring, and compliance checks in favor of speed to deployment.
Additional risks:
- Data leaks from prompt injection or poor access controls
- Regulatory penalties for privacy violations or discriminatory outcomes
- Reputational damage when bias or hallucinations surface in customer-facing applications
- Legal liability from automated decisions made without proper oversight
Antidote for organizations: Exploitation of vulnerabilities could, and probably will, happen to us. Treat risk management as a first-order business priority, not an afterthought. Build red-team practices, enforce model monitoring, and apply AI assurance frameworks (e.g., NIST AI RMF, ISO standards). Make “failing safely” a core competency.
Macho
How it manifests: Leaders set heroic targets like “50% of our processes automated with AI by next year” or announce “AI-first” transformations without assessing technical or cultural readiness, often fueled by competitor announcements or board pressure to show AI leadership.
Additional risks:
- Technical teams forced into shortcuts to meet impossible deadlines
- Burnout among staff asked to deliver transformational results with incremental resources
- Loss of credibility when AI initiatives consistently underperform expectations
- Change management failures when transformation pace exceeds organizational capacity
Antidote for organizations: Taking massive chances is foolish. Adopt a portfolio approach: balance exploratory AI projects with proven operational efficiency efforts. Set KPIs based on learning velocity and capability building rather than transformation speed. Celebrate sustainable wins over flashy gambles that can’t be maintained.
Resignation
How it manifests: Organizations give up independent thinking in two ways: (1) Fatalistic surrender, e.g. “AI will replace our jobs anyway,” “We can’t compete with Big Tech,” or “The model knows better than we do”; and (2) Herd following, e.g. “Everyone’s doing AI chatbots, so should we” or copying strategies without considering organizational context.
Additional risks:
- Loss of critical human judgment in decision loops
- Organizational stagnation as innovation stalls because “AI will just do it better”
- Talent attrition when staff feel powerless or irrelevant
- Strategic drift as decisions get outsourced to algorithms or industry trends
Antidote for organizations: We can make a difference: Treat AI as augmentation, not substitution. Provide clear reskilling pathways and keep humans meaningfully in the loop. Position AI adoption as a competitive differentiator unique to your organization’s context, not a surrender to inevitability or a copy of someone else’s playbook.
Summary Table
Hazardous Attitude | Typical Thought | Key Manifestation | Primary Risk | Antidote |
---|---|---|---|---|
Authority Confusion | “Who decides?” | Misaligning expertise with decision-making power: either deferring to brand authority on technical issues or letting technical teams make business decisions | Technical solutions that don’t solve business problems, or business commitments that can’t be delivered | Match expertise to decisions: technical authority for technical choices, business authority for business priorities, partnership for strategy |
Impulsivity | “Do it now” | Reacting to competitor announcements or new AI releases with immediate action demands, jumping between trends without strategic alignment | Shadow AI projects, wasted pilot budgets, initiative fatigue, security vulnerabilities from rushed implementations | Slow down to speed up: start with problem discovery, prioritize use cases with clear ROI, establish governance before launching pilots |
Invulnerability | “It won’t happen to us” | Assuming AI risks are someone else’s problem, skipping audits and monitoring | Data leaks, regulatory penalties, reputational damage from bias or hallucinations, legal liability from unmonitored automated decisions | Treat risk management as first-order priority, build red-team practices, enforce monitoring, apply AI assurance frameworks |
Macho | “We can do anything” | Setting heroic transformation targets without assessing readiness; “AI-first” announcements driven by competitor pressure | Technical shortcuts, staff burnout, credibility loss from underperforming initiatives, change management failures | Taking massive chances is foolish: adopt portfolio approach, set KPIs on learning velocity not transformation speed, celebrate sustainable wins |
Resignation | “What’s the use?” | Surrendering strategic agency through either fatalistic surrender (“AI will replace us”) or herd following (“Everyone’s doing chatbots, so should we”) | Loss of human judgment, organizational stagnation, talent attrition, strategic drift as decisions get outsourced to algorithms or trends | We can make a difference: treat AI as augmentation not substitution, provide reskilling pathways, keep humans in the loop, focus on unique organizational AI advantage |
Conclusion
Just as pilots are trained to recognize, name, and counteract hazardous attitudes in themselves, organizations need to surface these patterns in their AI strategies. A healthy AI culture:
- Matches expertise with decision-making authority
- Balances experimentation with strategic discipline
- Plans for risks before they materialize
- Sets ambitious but achievable goals
- Keeps human judgment central while leveraging AI capabilities