Annapolis is defined by three overlapping institutions: the Naval Academy, whose officers and staff manage AI-critical defense systems and train future military leadership; the Maryland State House and state government executive agencies, increasingly deploying AI for benefit administration, legislative analysis, and public-sector operations; and a cluster of federal contractors and defense primes (Northrop Grumman, Leidos, ManTech, and smaller consultancies) that build and deploy AI systems for federal customers. This unique governance context makes Annapolis training fundamentally different from commercial tech hubs. Federal contractors must achieve NIST AI Risk Management Framework compliance before deploying systems to government customers. State government has its own procurement and oversight rules. The Naval Academy operates under military governance frameworks, classification requirements, and operational security constraints that shape how AI systems can be developed, tested, and deployed. Training and change-management work here is inseparable from compliance work—getting the governance, policy, and audit-trail structures right is not a separate initiative, it is the core of the work.
Updated May 2026
Federal contractors in Annapolis building AI systems for government customers face mandatory NIST AI Risk Management Framework compliance. This is not optional guidance; it is a contractual obligation. Training programs for federal contractors span ten to sixteen weeks, cost seventy thousand to one hundred thirty thousand dollars, and address: executive briefings on federal AI governance requirements (NIST RMF, Executive Order 14110, FAR AI clauses); engineer and technical-lead training on NIST RMF core functions (Map, Measure, Manage, Govern) applied to system architecture and testing; training for compliance and documentation roles on building audit trails, maintaining compliance evidence, and preparing for federal audit; and HR and organizational-development support for role design (chief AI officer, responsible AI officer, bias testing lead). The most effective contractors embed NIST RMF compliance into system-development workflows from the start, treating it not as a final audit burden but as an integrated part of good engineering. Trainers for this segment must have federal procurement experience and understand NIST RMF deeply enough to translate it into technical and organizational action.
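One way to make that integration concrete is a per-system compliance record that engineers update as the system moves through development rather than assembling at audit time. A minimal sketch in Python; the names (SystemComplianceRecord, EvidenceItem, the example artifacts) are invented for illustration, not an official NIST artifact format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceItem:
    """One piece of compliance evidence (e.g., a test report or bias audit)."""
    artifact: str          # file or document name, e.g. "bias_audit_q1.pdf"
    rmf_function: str      # "map", "measure", "manage", or "govern"
    owner: str             # role accountable for producing it
    completed: date | None = None

@dataclass
class SystemComplianceRecord:
    """Hypothetical per-system record kept from project kickoff onward."""
    system_name: str
    intended_use: str
    evidence: list[EvidenceItem] = field(default_factory=list)

    def missing(self) -> list[str]:
        """Artifacts still open, reviewed at each sprint rather than at the end."""
        return [e.artifact for e in self.evidence if e.completed is None]

# Example: the record exists on day one and grows with the system.
record = SystemComplianceRecord(
    system_name="logistics-forecaster",
    intended_use="depot-level parts demand forecasting",
    evidence=[
        EvidenceItem("intended_use_statement.md", "map", "product lead", date(2026, 1, 15)),
        EvidenceItem("bias_audit_q1.pdf", "measure", "bias testing lead"),
        EvidenceItem("human_oversight_plan.md", "manage", "responsible AI officer"),
    ],
)
print(record.missing())  # -> ['bias_audit_q1.pdf', 'human_oversight_plan.md']
```

The point of keeping the record in the development repository, rather than in a separate compliance tool, is that the people building the system see the open items every sprint.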
The Naval Academy operates under military governance constraints that shape AI adoption and training fundamentally differently from commercial or civilian-government contexts. Officers and staff deploying AI systems must manage classified information, personnel with various security clearance levels, and operational security (OPSEC) requirements that restrict data sharing and system testing. Training programs for the Naval Academy and naval-adjacent organizations span eight to fourteen weeks, cost fifty thousand to ninety-five thousand dollars, and address: officer and civilian-staff briefings on AI capabilities, limitations, and risks in naval operations (fleet logistics, maintenance prediction, cyber defense); classified and unclassified AI system training for operators (understanding system outputs, recognizing degradation, knowing when to escalate); information-security and classification training (how to handle classified data in AI systems, what requires approval before deploying); and change-management support for integrating AI without undermining operational readiness. Trainers for this segment must have security clearance (or the organization must sponsor clearance processing), understand military culture and constraints, and be able to train on sensitive material appropriately.
Maryland State Government increasingly deploys AI in benefits administration (eligibility determination, fraud detection), legislative analysis (bill-impact analysis, constituent correspondence), and public-sector operations. Unlike federal contractors or the military, state agencies face public transparency and accountability requirements—if an AI system denies someone benefits, the person has a right to understand why and appeal. Training programs for state government span six to twelve weeks, cost thirty-five thousand to seventy thousand dollars, and emphasize: executive and legislative briefings on AI governance, budget implications, and oversight requirements; case-worker and benefits-administrator training on AI-assisted decisions (reading AI recommendations, understanding why the system recommended X, knowing when to override); HR training on algorithmic transparency and fairness (how to design systems that are explainable to the public, how to document decisions, how to set up appeal processes); and legal and policy teams on compliance with state transparency laws, federal ADA requirements, and emerging AI-specific regulations. The most effective programs anchor training in real case studies from Maryland agencies—showing how benefits AI works at DHHS, how legislative analysis AI works at the legislative reference bureau—rather than generic examples.
The NIST AI RMF defines four core functions: Map (understand your AI system's intended use, scope, and risk landscape); Measure (test the system for performance, bias, adversarial robustness, and other risk dimensions); Manage (implement mitigations for identified risks, such as retraining, human oversight, and decision thresholds); Govern (document and maintain compliance evidence, assign roles and responsibilities, set up audit and escalation processes). Federal contractors need to show evidence of all four functions for every AI system deployed to a federal customer. Evidence includes test reports, bias audits, documentation of who owns which risk, and logs of decisions made during deployment. Trainers should walk contractors through how to integrate the four functions into their system-development lifecycle (where testing happens, what artifacts get produced, how audit trails are maintained) rather than treating NIST compliance as a checkbox at the end.
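One way trainers can make the "not a checkbox at the end" point concrete is a release gate in the build pipeline that blocks deployment unless each of the four functions has at least one completed evidence artifact. A hedged sketch, with invented artifact names:

```python
# Hypothetical release-gate check: block deployment unless every NIST AI RMF
# function (Map, Measure, Manage, Govern) has at least one completed artifact.
REQUIRED_FUNCTIONS = {"map", "measure", "manage", "govern"}

def release_gate(evidence: list[dict]) -> tuple[bool, set[str]]:
    """Return (ok, missing_functions) for a list of evidence records."""
    covered = {e["rmf_function"] for e in evidence if e.get("completed")}
    missing = REQUIRED_FUNCTIONS - covered
    return (not missing, missing)

evidence = [
    {"artifact": "use_case_map.md",   "rmf_function": "map",     "completed": True},
    {"artifact": "bias_audit.pdf",    "rmf_function": "measure", "completed": True},
    {"artifact": "mitigation_log.md", "rmf_function": "manage",  "completed": False},
    {"artifact": "raci_matrix.xlsx",  "rmf_function": "govern",  "completed": True},
]

ok, missing = release_gate(evidence)
if not ok:
    # In a CI pipeline this would fail the deployment job and name the gap.
    raise SystemExit(f"Deployment blocked; no completed evidence for: {sorted(missing)}")
```

A gate like this does not prove the evidence is any good, but it guarantees the conversation about each function happens before deployment rather than after.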
Open-source models (Llama, Mistral, etc.) can be deployed to federal customers, but they carry more documentation burden. Contractors must: (1) audit the model's training data for security or bias issues; (2) test the model in their specific use case (model performance can vary by context); (3) assess supply-chain risk (where the model came from, whether the source is still maintained, what happens if the source goes offline); (4) set up monitoring and retraining plans in case the model degrades over time. Contractors using proprietary models (OpenAI, Anthropic, etc.) face a different question: does the vendor have the security certifications and compliance posture the federal customer requires? Using an unapproved vendor for a federal system is a compliance risk. Trainers should help contractors think through the trade-off: open-source means more audit burden, proprietary means more vendor-verification burden.
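Of those steps, (2), testing the model in the contractor's specific use case, translates most directly into code. A minimal sketch in plain Python; the test records, the subgroup field, and the predict placeholder stand in for whatever model and data the contractor actually uses:

```python
from collections import defaultdict

def evaluate_by_group(records, predict, group_key="subgroup"):
    """Overall and per-subgroup accuracy on the contractor's own test set.

    records: dicts with input fields, a 'label', and a subgroup field.
    predict: callable returning a predicted label for one record
             (placeholder for the open-source model under evaluation).
    """
    correct, total = 0, 0
    by_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
    for r in records:
        hit = int(predict(r) == r["label"])
        correct += hit
        total += 1
        by_group[r[group_key]][0] += hit
        by_group[r[group_key]][1] += 1
    overall = correct / total
    per_group = {g: c / n for g, (c, n) in by_group.items()}
    return overall, per_group

# Toy records standing in for the contractor's actual federal use case.
test_set = [
    {"text": "ship part overdue", "label": "urgent",  "subgroup": "region_a"},
    {"text": "routine restock",   "label": "routine", "subgroup": "region_b"},
    {"text": "engine fault code", "label": "urgent",  "subgroup": "region_b"},
]
overall, per_group = evaluate_by_group(test_set, predict=lambda r: "urgent")
print(overall, per_group)  # per-subgroup numbers show where the model is weaker
```

The per-subgroup breakdown matters because overall accuracy can look acceptable while one population served by the federal customer gets much worse results.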
When a deployed system shows bias, the answer is immediate escalation and mitigation. The process: (1) document the bias (what the system is doing wrong, how severe it is, how many people it affects); (2) notify your federal customer immediately (a contractual obligation; hiding it is grounds for termination); (3) implement an interim mitigation (pause the system, add human oversight, restrict it to lower-risk cases) while you retest and retrain; (4) conduct a root-cause analysis (training-data bias, feature engineering, model architecture); (5) retrain and retest; (6) document the whole incident for your compliance file. Speed matters: federal customers expect rapid response. Contractors with internal escalation procedures and clear role assignments (who makes the call, who notifies the customer, who supervises the fix) move faster than those improvising. This should be part of the governance training.
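A minimal sketch of how a contractor might pre-assign those roles and track the six steps so the response is not improvised mid-incident; the step names, role titles, and system name are illustrative, not a contractual template:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Roles assigned before any incident, so nobody is deciding ownership mid-crisis.
ESCALATION_ROLES = {
    "decision_authority": "Chief AI Officer",
    "customer_notification": "Program Manager",
    "technical_fix": "ML Lead",
}

# The six steps from the process above, in order.
STEPS = [
    "document_bias",
    "notify_customer",
    "interim_mitigation",
    "root_cause_analysis",
    "retrain_and_retest",
    "file_compliance_record",
]

@dataclass
class BiasIncident:
    system: str
    description: str
    completed: dict = field(default_factory=dict)  # step -> ISO timestamp

    def complete(self, step: str) -> None:
        assert step in STEPS, f"unknown step: {step}"
        self.completed[step] = datetime.now(timezone.utc).isoformat()

    def next_step(self) -> str | None:
        return next((s for s in STEPS if s not in self.completed), None)

incident = BiasIncident("eligibility-screener", "higher false-denial rate for one subgroup")
incident.complete("document_bias")
print(incident.next_step())  # -> 'notify_customer'
```

The timestamps are the point: they become the compliance-file record of how quickly the contractor documented, notified, and mitigated.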
Be honest: "The system looks at many factors and predicts X. We test the system to make sure it is accurate and fair. If you disagree with the decision, here is how to appeal." You cannot explain a neural network's decision in simple terms; the black-box problem is real. State agencies deploying AI in benefits administration should not pretend the system is fully explainable. They should be transparent about that limitation and compensate by: (1) providing detailed explanations of what factors the system considers (even if you cannot explain how it weighs them); (2) allowing human appeals and overrides (if staff or the applicant can show the system is wrong, they can escalate); (3) monitoring for bias and unfairness; (4) training case workers on how to handle situations where the system's recommendation seems unreasonable. This is more credible, and more legally defensible, than claiming the system is fully transparent when it is not.
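Operationally, that compensating approach can be as simple as a notice generator that lists the factors the system considered and the appeal route, without claiming to explain the model's internal weighting. A hedged sketch; the field names, wording, and the appeals@example.gov contact are placeholders an agency would adapt to its own statutes and processes:

```python
def decision_notice(applicant_name, decision, factors_considered, appeal_contact):
    """Build a plain-language notice: what was decided, what was considered,
    and how to appeal. Deliberately does not claim to explain model internals."""
    lines = [
        f"Dear {applicant_name},",
        f"An automated system helped produce this decision: {decision}.",
        "The system considered the following information:",
    ]
    lines += [f"  - {factor}" for factor in factors_considered]
    lines += [
        "We cannot describe exactly how each factor was weighed, but the system",
        "is tested regularly for accuracy and fairness.",
        f"You have the right to a human review. To appeal, contact {appeal_contact}.",
    ]
    return "\n".join(lines)

print(decision_notice(
    applicant_name="J. Doe",
    decision="your benefits application needs additional documentation",
    factors_considered=["reported household income", "household size", "prior case history"],
    appeal_contact="the caseworker on your letter or appeals@example.gov",
))
```

Case-worker training then focuses on what happens after the notice: how to review the listed factors with the applicant and when to override the recommendation.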
Chief AI Officer (or Chief Data Officer or Chief Information Officer, depending on org structure) owns: (1) AI strategy and governance (what systems get built, what risk profile is acceptable); (2) compliance and audit (is the organization meeting NIST RMF or state requirements); (3) risk management and escalation (what happens when a system breaks or shows bias); (4) cross-functional coordination (making sure engineering, legal, HR, and operations teams are aligned); (5) board or executive briefing on AI risks and investment. The CAO is not necessarily the person who builds AI systems; that's the Chief Data Scientist or VP of Engineering. The CAO is the governance owner. Organizations deploying AI in high-stakes contexts (government, defense, benefits administration) need a clear CAO role with executive authority and direct board access. Training should clarify the role, the authority, and the escalation paths.
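A small sketch of the kind of escalation map that training should leave behind, with illustrative severity levels and role names each organization would replace with its own titles:

```python
# Hypothetical escalation map: who decides and who is informed at each severity level.
ESCALATION_PATHS = {
    "low":      {"decides": "Responsible AI Officer", "informs": ["Chief AI Officer"]},
    "moderate": {"decides": "Chief AI Officer",       "informs": ["Legal", "Program Manager"]},
    "severe":   {"decides": "Chief AI Officer",       "informs": ["Executive team", "Board", "Customer"]},
}

def route(severity: str) -> str:
    """Return the decision owner and notification list for an incident severity."""
    path = ESCALATION_PATHS[severity]
    return f"{path['decides']} decides; notify {', '.join(path['informs'])}"

print(route("severe"))  # -> 'Chief AI Officer decides; notify Executive team, Board, Customer'
```

The specifics will differ by organization; the training goal is that every severity level has a named decision owner and notification list before an incident occurs.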