Scroll Top

USA Government Agencies Working to Set AI Compliance Standards

January 15, 2025

The Federal Government and its various agencies are getting their hands around the good, the bad, and the ugly of artificial intelligence.

The FTC is reining in false claims that AI companies are making about their products.

The FTC launched Operation AI Comply to pursue companies making false claims about their AI tools.

AI tool company DoNotPay marketed its legal services as better than hiring a lawyer for representation. The FTC says the company misled consumers about how good the services really are, and a $193,000 settlement was reached over the claims.

Another company, RYTR, is under investigation over claims that users misused its AI software to generate false, AI-generated content for "detailed customer reviews." RYTR is marketed as a way to write content faster for blog articles, social media posts, and more.

The FTC has also sued Ascend Ecom, Empire Builders, and FBA Machine for promising earnings from operating an online AI-powered Amazon storefront. Consumers were defrauded of over $25 million.

The National Telecommunications and Information Administration (NTIA), located within the Department of Commerce, is the Executive Branch agency that is principally responsible by law for advising the President on telecommunications and information policy issues. President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety.

The NTIA has issued an AI Accountability Policy Request for Comment aimed at building trust in AI. It seeks feedback on what policies should guide the development of AI audits, assessments, and certifications, and on how to make AI systems trustworthy and accountable.

On NTIA's website, the number one key issue listed is Artificial Intelligence and other emerging technologies.

NTIA defers to NIST for guidance on AI.

The NIST Artificial Intelligence Risk Management Framework (AI RMF) and its generative AI profile, NIST-AI-600-1, were developed in part to fulfill an October 30, 2023 Executive Order. The profile can help organizations identify the unique risks posed by generative AI and proposes actions for generative AI risk management that best align with their goals and priorities. In collaboration with the private and public sectors, NIST developed the framework to better manage the risks that artificial intelligence (AI) poses to individuals, organizations, and society. The AI RMF is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.

Released on January 26, 2023, the Framework was developed through a consensus-driven, open, transparent, and collaborative process that included a Request for Information, several draft versions for public comments, multiple workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others.

The AI RMF defines an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy. AI risk management is a key component of responsible development and use of AI systems, and responsible AI practices can help align decisions about AI system design, development, and use with an organization's intended aims and values. AI risk management can drive responsible uses and practices by prompting organizations, and the internal teams who design, develop, and deploy AI, to think more critically about context and about potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems helps enhance trustworthiness and, in turn, cultivate public trust.

The goal of the AI RMF is to offer a resource to organizations designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility for organizations of all sizes, in all sectors, and throughout society to implement its approaches.

To read more about the AI Framework, go to: https://doi.org/10.6028/NIST.AI.600-1

To read more Topgallant Cybersecurity blog posts, go to: https://www.topgallant-partners.com
