"What is real? How do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see, then real is simply electrical signals interpreted by your brain.”
– The Matrix (Wachowski & Wachowski, 1999)
The question of what is real has never loomed larger in human history. On one hand, AI holds enormous potential to transform society in ways that contribute to the greater good. On the other lies a precipitous slide into nihilism, where the truth is neither known nor knowable. At the same time, competition is driving AI investment and incentivizing companies to automate work once done by humans, from call centers to journalism. That adoption, in turn, calls into question companies’ obligation to disclose their use of artificial intelligence. Sports Illustrated found itself embroiled in controversy after it was accused of publishing articles written by AI and presenting them as human work. SI denied the accusations, though it admitted that the authors’ names and bios were artificial (Kim, 2023). Technology publisher CNET used AI to write finance articles that it attributed to staff reporters; David Bauder of the Associated Press noted that the only way to know AI was involved was to click on the author attribution (Bauder, 2023). Elsewhere, contact centers are being radically transformed by chatbots and automated phone systems, project management from healthcare to construction is assisted by artificial intelligence, and companies are increasingly experimenting with AI to find the optimal mix of human and intelligent automation (Vedvick, 2025; Vedvick, 2025b).
These observations spell a future of social assimilation, in which humans and artificial intelligence coexist and the former depend on AI for information, services, and general productivity. They predict a coming state where artificial and human interactions are increasingly indistinguishable, and where disclosure becomes the focal point of tension between market competition and social cohesion. This paper therefore explores the ideological battle over AI disclosure: what it means, why we care, and whether disclosure is even possible at all.
Background
AI investment and AI adoption are not synonymous. Rather, investment is a statement of the degree to which developers expect AI to be used, while adoption is a measure of our progress toward that vision. Nonetheless, both portray a future in which AI is widespread, facilitating services from healthcare to restaurant reservations. The rise of AI is also fueling a sense of regulatory urgency. As of July 2025, all fifty states, along with Puerto Rico and the Virgin Islands, had introduced AI legislation, much of it focused on disclosure and privacy (NCSL, 2025). In many respects, however, lawmakers remain behind the curve. Engelke (2020), writing for the Atlantic Council, observed that establishing guidelines and commissions is one thing; turning recommendations into law is something different (p. 4). Indeed, proposing regulation, as many states have done, is only one step in the process of enacting law. All the while, AI continues to advance with each new version of GPT, Gemini, Claude, and the others.
Driving the pace of development is an as-yet inexhaustible supply of investment. CNBC reported that global annual spending on artificial intelligence is expected to surpass $500 billion by 2026, and that the cost of maintaining that infrastructure will exceed $2 trillion by 2030 (Morabito, 2025). There is, however, a belief that remarkable gains in efficiency await. And indeed, there are indicators that, done right, AI can contribute significantly to the bottom line. Klarna (2024), for example, reported that its chatbots do the work of 700 agents and have saved the company more than $40 million. Microsoft similarly claims it saved over $500 million in call center costs in 2024 alone (Babu, 2025). And Camping World reported a 33% increase in contact center efficiency as a result of AI (IBM, n.d.). Artificial intelligence has also been used to perform market analysis and segmentation and, in one recent study, was shown to have improved ad campaign performance by 30% (Hossain et al, 2024, pp. 7-11).
As more companies implement AI solutions, pressure increases to establish transparent practices. However, the definition of transparency is, at the very least, contested. Some authors, for example, fold disclosure, privacy, and bias into the broader concept of transparency (Lu, 2020; Guha et al, 2024). Transparency is, in fact, an umbrella term that encompasses each of those ideas. It can be thought of abstractly as the absence of deception, but the term offers little definitional clarity beyond a shared, common-sense understanding of what it means. Disclosure, on the other hand, is more concrete: at its most basic level, it is a notification that AI is at work. As such, disclosure is a practical, if incomplete, starting point for framing a conversation on transparency. It is also worth noting that, for the purposes of this paper, transparency and disclosure are used somewhat interchangeably. The close relationship between transparency, trust, and honesty also led to some creative interplay among the words, but at its core, this is a discussion about disclosure.
All of that said, academic and industry leaders alike agree that as much as we may want disclosure, it may not be possible. In an article published in the George Washington Law Review, Guha et al (2024) acknowledge the limits of developers’ ability to understand how their creations operate and to produce the data behind those decisions (p. 1497). Executives surveyed by MIT and Boston Consulting Group expressed concern over their ability to explain AI’s decisions in a way that most people would understand, adding that as AI becomes more pervasive, it will become more difficult to disclose where and how often it is being used (Renieris et al, 2024). And Anthropic, creator of the Claude platform, wrote in a blog post that unlike traditional software, AI systems are not directly programmed by humans but instead develop their own strategies for solving problems. This inability to get inside AI’s head limits our ability to understand how it derives answers (ANT, 2025). Said differently, artificial intelligence may have been created by humans, but how it operates is a black box.
In summary, the regulatory landscape is defined by aggressive investment in infrastructure, a promise of outsized efficiency, and an opaque medium. Compounding the legislative difficulty are the pace of AI development, political bureaucracy, and technical hurdles that make disclosure policy hard to implement and difficult to enforce.
Literature review
Despite the challenges, lawmakers, the private sector, and the public generally agree on the need for comprehensive disclosure solutions. These solutions, however, are often contested and more often unspecific. That said, there are three broad categorical approaches: government-led policy, industry self-regulation, and a combination of the two. Many opinions reviewed here advocate for strong governance and willing private sector participation. Very few argue that self-regulation is adequate given the social and financial risks of non-disclosure. There are notable advocates for leaving the industry to manage its own affairs, former Google CEO Eric Schmidt among them (Wheeler, 2023), but such libertarian perspectives were rare or came wrapped in calls for government partnership. The overwhelming view is that non-disclosure poses financial and legal risks to businesses, and risks to social cohesion and public trust for government. Lawmakers also expressed a desire not to repeat the regulatory mistakes they made with social media (Zorthian, 2023). However, while regulatory goals may be relatively clear, how those goals are achieved remains undefined.
Regulatory use case
As mentioned earlier, the pace of AI development is putting extraordinary pressure on lawmakers to establish regulatory norms. But pressure alone doesn’t explain why regulation is necessary. The answer is a combination of factors, but, in short, transparency underpins trust. We extend this trust to businesses, public institutions, and elected officials. Transparency is the foundation of personal agency and control over one’s data. It forms the basis of business transactions and social norms. Without trust, economic and political stability break down. As Keppeler (2024, p. 2) writes, transparency is an ethical concern, and failure to disclose AI undermines trust, legitimacy, and accountability. In an issue of the Vanderbilt Journal of Entertainment & Technology Law, Lu (2020) called privately owned algorithms “inscrutable” (p. 102), writing that such opacity undermines our fundamental rights of privacy and equality and risks democratic norms (pp. 102-133). Of AI in scientific research, Resnik and Hosseini (2025) make a similar argument, noting that transparency is both good practice and a means of promoting honesty and rigor.
As with trust, transparency forms the basis for ensuring privacy. Many of the academics reviewed here say privacy begins with transparent disclosure of how one’s data is being used. Engelke (2020) summarized privacy as a fundamental human right, noting that many European policies are converging around a basic set of ethical principles, including protecting privacy, enforcing transparency, and ensuring human oversight (pp. 3, 6). Though existing disclosure practices might seem sufficient, they are rigid and biased against the consumer. Di Porto (2021), writing in Artificial Intelligence and Law, argued for reform, noting that the take-it-or-leave-it nature of privacy policies reduces consumer bargaining power and increases information asymmetry (p. 15). In other words, regulation is needed to level the playing field and promote consumer choice.
The issues of trust and privacy are not solely the domain of political philosophers. Businesses require guidance to know how their systems should accommodate these challenges. OpenAI CEO Sam Altman, for example, voiced concern over AI’s lack of a legal framework and acknowledged that the industry hasn’t figured out how to protect sensitive user conversations (Perez, 2025). Mozilla president Mark Surman said in an article published by MIT Sloan Management Review that disclosures are the most basic form of transparency and are required in all facets of life. Similarly, Chevron’s former Chief Data Officer, Ellen Nielsen, favored mandatory disclosures, arguing that consumers need them to make informed decisions (Renieris et al, 2024).
From the market perspective, regulation is needed to reduce risk and standardize reporting. Companies recognize that bias and privacy breaches can erode trust and investor confidence, making strong governance and proactive oversight essential (Tonello, 2025). Moreover, AI adopters assume much of the risk when those systems hallucinate or produce unexpected results, leading to lost market value or other damages (Kremer et al, 2023). Finally, failure to disclose AI may allow insiders to trade on material, non-public information, and pose a financial risk to the company and its investors (Lu, 2020, pp. 127-128).
Disclosure policy is also an essential component of reducing bias. Bias, however, is not one-dimensional; as the literature reviewed here shows, it can manifest in data, system behavior, and user experience, and can influence how we interpret information from those systems. These factors make exposing bias difficult. AI systems can be unpredictable and vulnerable to subtle biases in their training data. They can perform well in testing and fail in the real world. Bias can also be introduced simply by failing to foresee how data or a process will be used in production (Engelke, 2020, pp. 6, 16). Opacity can even induce human bias: content that is obviously artificial but not disclosed can lead to perceptions of dishonesty. In such cases, disclosure helps reduce bias by being upfront and acknowledging the use of AI (Resnik & Hosseini, 2025, pp. 7, 9). In short, while bias might be difficult to prevent, regulation can expose preference by making it transparent to the user.
That said, bias is not limited to humans or data. AI systems can also be engineered with a degree of bias by design. Anthropic (ANT, 2025) acknowledged that its platform is biased toward agreeing with the user rather than following logical steps. Agreeability may not matter when the stakes are low, but in critical moments it may matter a lot. Queen Mary University professor Gina Neff makes a similar point, arguing that AI’s preference for pleasing users may not matter when searching for movies, but does when it comes to healthcare, science, and news (Islam et al, 2025). Regulation, therefore, is needed to expose flaws in these systems that may lead to misleading results.
The preceding discussion has framed the need for disclosure around trust, privacy, and bias. Admittedly, these concepts are vague and malleable, particularly when compared to the business case for adopting AI. Yet, Engelke (2020) writes that they force governments to establish their values and define unacceptable outcomes (p. 20). There is also broad public support for transparency. Reputational damage resulting from bias, unsafe outputs, and misuse of AI, for example, was a top concern among S&P 500 companies (Tonello, 2025). Moreover, in a survey of consumers, 96% of respondents favored human review of automated decisions, and 80% wanted a substantive disclosure of how AI was being used (Wulf and Seizov, 2022). Strong public support for transparency also reflects a more subversive truth: secrecy often benefits private business. Companies, for example, often seek protection under trade secrecy laws because such laws promote opacity and align with corporate priorities (Lu, 2020, p. 117). Therefore, even where there is strong corporate support for disclosure, lawmakers should recognize that regulation is still needed to ensure the integrity of the public good.
Finally, geopolitical considerations reinforce the need for good AI policy. Such policies not only define the application of AI within a country but can influence an entire region (Engelke, 2020). Indeed, policy proliferation is common in other arenas; environmental policies, for example, spread and influence regulations in neighboring countries (Vedvick, 2024). More importantly, a country that can enforce a regional AI standard would hold a significant competitive advantage over non-regional players (Engelke, 2020, p. 10). Disclosure policies can come with unintended consequences as well: making domestic AI technology too transparent could expose trade secrets and undermine competitiveness (Guha et al, 2024, p. 1506). Policymakers are therefore incentivized to pass AI legislation that protects not only the public good but our economic and national security interests as well.
Foundations of good disclosure policy
Good policy begins with having clearly defined goals. In general, the literature reviewed here emphasizes transparency, consumer agency, and accountability as outcomes of regulation. Voluntary disclosure and private sector cooperation were also common themes, with a few authors advocating for industry-led solutions. The overwhelming consensus, however, is that consumers need to know when they’re interacting with AI and be able to act on that information.
Actionable disclosures are often simple and approachable; however, the desires for transparency and brevity are sometimes at odds. Researchers argue, on the one hand, that disclosures should be explainable to the common person, kept to a minimum, and preserve the balance between for-profit priorities and democratic values. On the other hand, firms should be required to disclose how their systems are being used, their desired outcomes, and their shortcomings (Lu, 2020, pp. 134-139). Full disclosure, in this case, quickly runs afoul of the desire for simplicity and pro-market competition. Said differently, there is an inherent contradiction between increased privacy and transparency, particularly when addressing undesirable outcomes like discrimination. Policies that favor privacy may end up reinforcing opacity (Guha et al, 2024, pp. 1479-1480).
Nonetheless, simplicity is a starting point for AI regulation. To begin with, disclosures that are easy to grasp are a core component of responsible AI (Renieris et al, 2024). They are most effective when they are understandable, actionable, and verifiable (Guha et al, 2024, p. 1503); however, they are often convoluted, ineffective, and seldom read (Di Porto, 2021, p. 19). As a result, the utility of any legislation that does not both inform and enable the user to act is diminished (Wulf & Seizov, 2022, p. 237; Guha et al, 2024). For this reason, researchers say, disclosures should be placed at key decision points throughout a process (Guha et al, 2024, p. 1504). In fact, basic transparency may be exactly what the consumer wants. Studies have shown that simple disclosures are not only effective but also convey a sense of privacy, fairness, and accuracy to the customer (El Ali et al, 2024, p. 8).
That said, there is a case to be made for in-depth disclosure that both acknowledges the use of AI and provides the underlying data. After all, Wulf and Seizov (2022) found that a supermajority of respondents preferred substantive disclosures over basic acknowledgement (p. 240). IBM’s Chief Privacy and Trust Officer, Christina Montgomery, argued for “radical transparency,” calling for the AI equivalent of nutrition labels on products and adding that platforms should have to explain the data and algorithms behind their outputs (Zorthian, 2023). And, in a survey of AI experts, most respondents agreed that AI disclosures should include how consumers’ personal data was being used (Renieris et al, 2024).
Indeed, there is a compelling case to be made for full disclosure. AI can, for example, target ad campaigns based on data collected from social media, and observers argue that the public has a right to know when their information, videos, and photos are being used without permission (Russell, 2025). These concerns have led to calls for more stringent policies. Lu (2020), for instance, argues that firms should be compelled to publish their testing data, make material information about their products available, including the desired outcomes and shortcomings of their algorithms and their data management plans, and disclose how they collect and manage that data (pp. 135, 149).
Bridging these perspectives is a multi-tiered policy approach. One solution is to classify disclosures as mandatory, optional, or unnecessary. In scientific research, for example, an AI contribution must be intentional and substantial to warrant disclosure (Resnik & Hosseini, 2025, pp. 5-6). By contrast, AI used for secondary research, such as looking up contact information or preparing for meetings, would be considered unnecessary to disclose (Renieris et al, 2024). Similarly, a simple two-layered approach may be sufficient. The first layer would be mandatory and focused on algorithms that make decisions for humans; it would require firms to disclose how systems are being used and their potential risks, and only information that pertains to the public interest would be disclosed. The second layer would be optional and cover the underlying aspects of AI, like training data (Lu, 2020, p. 135). A third approach is to categorize disclosure based on high-level technical factors: institutional disclosures would target organizational practices; system-level disclosures would describe how a specific system was developed; and prediction-level disclosures would target the specific engagement, including disclosing when AI systems were used to generate an output (Guha et al, 2024, p. 1496).
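To make the tiers described above concrete, the sketch below models a hypothetical classification step a firm might run over its catalog of AI use cases. The tier names, fields, and criteria are illustrative assumptions loosely drawn from the approaches discussed here (Lu, 2020; Guha et al, 2024; Resnik & Hosseini, 2025), not a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum


class DisclosureTier(Enum):
    """Hypothetical tiers echoing the mandatory/optional/unnecessary split."""
    MANDATORY = "mandatory"      # system makes or shapes decisions for humans
    OPTIONAL = "optional"        # underlying details, e.g., training data
    UNNECESSARY = "unnecessary"  # incidental uses, e.g., meeting prep


@dataclass
class AIUseCase:
    """One application of AI within an organization (illustrative fields)."""
    name: str
    makes_decisions_for_humans: bool
    exposes_training_details: bool


def classify(use_case: AIUseCase) -> DisclosureTier:
    """Assign a tier using the rough public-interest criteria sketched above."""
    if use_case.makes_decisions_for_humans:
        return DisclosureTier.MANDATORY
    if use_case.exposes_training_details:
        return DisclosureTier.OPTIONAL
    return DisclosureTier.UNNECESSARY


if __name__ == "__main__":
    catalog = [
        AIUseCase("resume screening", True, False),
        AIUseCase("model card for public release", False, True),
        AIUseCase("meeting-prep summary", False, False),
    ]
    for case in catalog:
        print(f"{case.name}: {classify(case).value}")
```

The hard part, of course, is not the lookup but agreeing on the criteria; the sketch simply makes the tiers explicit enough to argue about.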
Finally, good policy must drive accountability. As Keppeler (2024, p. 2) notes, accountability suffers from insufficient transparency. In fact, the notion of accountability was a common theme throughout the literature. It is a cornerstone of good research and integral to the scientific process (Resnik & Hosseini, 2025, p. 3). Accountability not only promotes ethical conduct but is a basis for legislative measures like regulations and licensing requirements (Guha et al, 2024, p. 1523). Most importantly, holding people accountable for the actions of AI is the basis of effective regulation (Saheb & Saheb, 2024, p. 12; Kremer et al, 2023).
Certainly, it’s naïve to think that appeals to conscience or altruistic goodwill are sufficient guardrails against for-profit and competitive motivations. Legislation needs to reflect the public’s commitment to good governance. To that point, Kremer et al (2023) call for a governance structure that establishes oversight and accountability. They note that the EU has proposed fines as high as 7% of annual global revenue for violators of AI regulation. California’s Transparency in Frontier Artificial Intelligence Act addresses developers of large AI models and imposes fines of up to $1 million per violation (Wiener, 2025). Developers of high-risk AI systems in Colorado, by contrast, face fines of just $20,000 per violation (Siegal & Garcia, 2024). The disparity between these penalties illustrates how widely enforcement measures can vary state to state. Nonetheless, meaningful penalties are a critical component of good policy. By forcing executives to answer for financial damages, policymakers drive accountability and protect the public good.
Private sector participation
It must be acknowledged that proactive private sector participation, including self-regulation, is a component of transparent AI. Google and Microsoft have emerged as two of the most visible adopters of AI guidelines, though this is partly due to their status as industry leaders, and not without controversy; Google, for example, dropped its AI advisory board after receiving criticism over its composition (Engelke, 2020, pp. 3-4). Nonetheless, EU commissioner for competition Margrethe Vestager met with Google leadership and emphasized the need for public and private sector collaboration (Chee, 2023). In an open letter to lawmakers and industry leaders signed by Elon Musk (Life, 2023), signatories called for proactive, industry-led initiatives to ensure that AI is safe, trustworthy, and loyal, while simultaneously calling for tech leaders to collaborate with policymakers on AI governance. Likewise, Sam Altman testified under oath before Congress and recommended that policymakers develop an AI regulatory entity similar to the FDA, including safety standards and audits from independent experts (Zorthian, 2023). In an interview with the BBC, Google CEO Sundar Pichai said that while consumers should not blindly trust what AI tells them, companies need to take more responsibility for the accuracy of their systems, not outsource that obligation to the customer (Islam et al, 2025). Finally, in an international survey of AI experts published in MIT Sloan Management Review, researchers found that 84% of respondents agreed that disclosures should be required, with several advocating for approaches mirroring nutrition labels on food products and prescription drug disclosures on pharmaceuticals (Renieris et al, 2024).
Still others go a step further. Eric Schmidt argues for a self-regulated approach to controlling AI, admonishing governments as incapable of getting policy right (Wheeler, 2023). Amit Shah of Instalily AI said mandatory disclosures would slow innovation and overburden businesses, adding that the pace of AI development would quickly render such policies obsolete (Renieris et al, 2024). In 2023, seven tech companies, including Anthropic, Google, and Meta, reached an agreement with the White House to voluntarily regulate their AI systems. The self-regulatory measures include providing access to independent auditors, information sharing, and governance transparency, among others. However, a year after the agreement was reached, MIT reported that third-party access was still limited and information sharing needed improvement. That said, MIT found that, overall, companies scored highly on their efforts to reduce bias and prevent discrimination (Heikkilä, 2024).
Finally, unlike governments, the private sector is capable of innovating and moving quickly. McKinsey recommends that businesses not wait for regulators and instead take proactive action: creating central repositories for all applications of AI across their organizations and publishing clear developer documentation, including the risks and intended uses of the technology (Kremer et al, 2023). Regulatory sandboxes that combine public and private efforts could provide an environment in which AI models are tested, reviewed, and approved by regulators. Sandboxes could also be used to test consumer responses to various disclosure approaches, discover new approaches, or amend existing policies (Di Porto, 2021, pp. 16, 37-38, 43). Lastly, provenance, or watermarking source data, is another area of interest for lawmakers and private businesses. By tagging the underlying data, publishers would expose images and text created by AI. However, these efforts are not widely implemented, nor have they been proven effective (Guha et al, 2024; El Ali et al, 2024). In short, regulation is coming for AI, and it behooves private businesses to get ahead of the curve by declaring their use of artificial intelligence and taking steps to make the underlying data more accessible. However, the best methods for achieving those objectives remain theoretical.
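As a rough illustration of the provenance idea, the sketch below attaches a small manifest to generated text and lets a downstream reader check that the content and manifest still match. It is a minimal stand-in, with hypothetical field names, for real provenance and watermarking standards (such as C2PA-style content credentials), which operate on media files and cryptographically signed metadata rather than a simple hash.

```python
import hashlib
import json
from datetime import datetime, timezone


def attach_provenance(content: str, generator: str) -> dict:
    """Wrap generated text in a simple, illustrative provenance manifest."""
    return {
        "content": content,
        "provenance": {
            "generator": generator,
            "ai_generated": True,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # The hash lets a reader confirm content and manifest still match.
            "sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }


def verify_provenance(manifest: dict) -> bool:
    """Recompute the content hash and compare it with the recorded value."""
    recorded = manifest["provenance"]["sha256"]
    actual = hashlib.sha256(manifest["content"].encode("utf-8")).hexdigest()
    return recorded == actual


if __name__ == "__main__":
    tagged = attach_provenance("Quarterly outlook drafted by an assistant.", "example-model")
    print(json.dumps(tagged, indent=2))
    print("manifest intact:", verify_provenance(tagged))
```

A scheme like this only works if the manifest survives copying and republication, which is precisely where current watermarking efforts have struggled.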
Challenges
Ineffective legislation begins with poorly defined goals; laws are written without addressing which specific outcomes they seek to achieve. Guha et al (2024) call this a regulatory mismatch and note that it can arise from focusing on symptoms in lieu of root causes (pp. 1487-1489). Alternatively, officials may form commissions and draft frameworks, but while establishing ethics and norms might be virtuous, such efforts are also vague and non-binding (Engelke, 2020, p. 20). Even turning ideas into laws does not automatically make them actionable. AI legislation often fails to define what should be disclosed and who is responsible for making the disclosure (El Ali et al, 2024, p. 6). For instance, despite taking effect in 2018, the EU’s GDPR still poorly defines exactly what constitutes clear and understandable disclosure (Wulf & Seizov, 2022, p. 238). It may be that vaguely written legislation provides lawmakers with a broad canvas on which to apply the law, but it’s equally easy to see how efforts to drive accountability and enforcement would stumble over such ambiguous terms.
Following the challenge of getting legislation right is the question of who enforces the newly written law. Lawmakers can either assign enforcement to an existing institution or pass legislation that establishes a new agency. However, assuming new agencies will be effective based on the success of prior entities, like the SEC, is faulty: securities regulation relies on leverage and recourse through powerful shareholders, lawsuits, and private institutions that won’t necessarily apply to AI (Guha et al, 2024, p. 1500). It’s worth considering, therefore, whether new agencies would have the power to regulate this new industry (Engelke, 2020, p. 20). It’s also possible that companies would simply downplay their level of AI use, data collection, or reliance on automation. Trade secrecy laws protect companies from being coerced into handing over proprietary secrets, allowing firms to argue that any such data are not material non-public information (Lu, 2020, pp. 132-154). In short, enforcing laws poses challenges that go beyond how they’re written. Agencies must be staffed and vested with the authority to enforce those regulations, or their utility is greatly reduced.
Regardless of the regulatory agency, expertise is a critical component of legislative success. A lack of expertise is a liability for lawmakers and one of the biggest challenges facing institutions (Di Porto, 2021, p. 18). Fewer than 1% of AI PhD graduates go into public service, for example, which puts governments at a disadvantage compared to industry experts (Guha et al, 2024, pp. 1480-1494). Effective AI policy therefore requires both legislative and recruiting efforts to ensure governments have the credibility to enforce the law.
Perhaps the most substantial challenge facing disclosure, however, is a lack of technical feasibility. Even the most well-intentioned organizations will struggle to provide the type of transparency their peers and lawmakers are seeking. Anthropic’s own engineers acknowledged that they’re not sure how their platform arrives at its answers (ANT, 2025). OpenAI voiced similar uncertainty, writing, “there is an inherent element of randomness to how [GPT] responds [and] as a result, the same question may yield different answers across different queries” (GPT, n.d., para. 5). The company goes on to say that GPT does not store or retain copies of its training data, nor does it copy and paste answers from a centralized source (para. 6). Anthropic offered a similar explanation of its system, writing that LLMs are trained on large datasets and develop their own strategies for solving problems. This independence, they write, makes it difficult for engineers to say exactly how the platform solves problems, describing the effort as complex and labor intensive (ANT, 2025).
Technical challenges aren’t limited to data access. AI’s inability to understand the nuances of real-world circumstances poses a different type of challenge. Regulatory requirements, like safety guidelines, vary widely from situation to situation, and it’s unrealistic to expect a probability-based system like an LLM to correctly anticipate those requirements all of the time. Context plays a critical role as well: what constitutes appropriate behavior on Reddit might differ significantly from other platforms (Alikhani & Hassan, 2025). For these reasons, researchers argue that the effectiveness of disclosure policy depends on whether the right data can be made available. They go so far as to suggest that the technology to enable regulatory compliance does not yet exist (Guha et al, 2024, pp. 1479-1497).
In general, private sector cooperation is a significant challenge facing regulators. The asymmetry in knowledge and data access is a noteworthy concern; however, competition, cost, complexity, and business impacts all undercut discretionary commitments like transparency and the public good. As a result, firms have a strong incentive not to comply, particularly when AI results are deficient, unequal, or harmful (Lu, 2020, p. 131). Moreover, regulation may stifle innovation by raising startup costs and unintentionally favoring incumbents, or it may lead companies to favor more explainable but less accurate AI systems (Guha et al, 2024, pp. 1498-1508).
Finally, disclosure risks damaging sales. AI disclaimers have produced strong negative reactions in consumers and are themselves an incentive for companies not to disclose AI (Wulf & Seizov, 2022). This risk was quantified in a study published in Marketing Science, which found that disclosing the use of AI chatbots reduced purchase rates by nearly 80%. Not only did AI disclosure reduce sales, but consumers perceived the bots as less knowledgeable and empathetic than their human counterparts (Luo et al, 2019, p. 938). In a similar study of job applicants, candidates expressed significantly less interest in a job once they knew AI was being used (Keppeler, 2024, pp. 1, 7, 27). In short, disclosure, particularly if it comes as a surprise, has a negative effect on sales and recruitment. These outcomes give companies a strong incentive not to disclose AI and pose a significant challenge to underpowered public agencies.
In summary, regulators face numerous challenges, from whether the necessary information can be provided to whether it’s in companies’ best interests to provide it. Governments must also contend with a lack of expertise that makes it difficult to enforce the law and regulate compliance. Legislators must therefore consider not only the appropriate scope of the law, but how that law will be enforced and whether they possess the expertise to enforce it.
Discussion and outlook
Despite the pages of documents, perspectives, and potential solutions reviewed here, the discussion insists on returning to the basic issue of trust. Disclosure is the foundation of confidence in everything from the systems we interact with to the institutions that govern us. Our rebuke of dishonesty can be deep and swift, yet also fleeting and forgotten. The examples are numerous, but they all strike a similar chord: disclosure as a moral concern is discretionary. When the steroid scandal hit Major League Baseball, fans might have felt lied to by the players, but they didn’t stop going to games (Vedvick, 2024b). Consumers routinely accept lopsided terms and vague disclosures for the convenience of shopping on Amazon. We look past the lies of politicians if their interests align with our own. And we’re willing to inflate, exaggerate, or even misrepresent our qualifications to win a coveted promotion. In short, it’s intellectually dishonest to pretend that we cherish an intimate relationship with trust, truth, and integrity. Yet, as Wulf and Seizov (2022) pointed out, we also care deeply about transparency. This duality presents a curious conundrum: we want both disclosure and the right to decide that it doesn’t matter.
Trust, however, is not sufficient to persuade business leaders to act in the public good. After all, tobacco executives lobbied tirelessly against regulating cigarettes, arguing that smoking doesn’t cause lung cancer. The oil industry denied that lead produces harmful effects in humans. And countless examples exist, from the Deepwater Horizon disaster to DuPont poisoning an entire community (Soechtig & Seifert, 2018), that illustrate the private sector’s failure to simply do the right thing. Instead, decades of lawsuits were required to change their behavior and enforce regulation. Similar breaches of trust can be found in social media as well. Drenik (2024) pointed out in Forbes that, at one time, social media companies pledged to protect user data and moderate content. It was later discovered that Facebook had allowed personal data to be collected and used for political ad campaigns without users’ knowledge or consent. And in 2019, Apple acknowledged that it had been listening to customers’ interactions with Siri without permission or disclosure (Engelke, 2020, p. 5).
Where exactly these encroachments on transparency become public outrage is difficult to predict, but there is a cost to businesses. Wheeler (2018) writes that the unbridled nature of the Gilded Age eventually went too far and produced a popular uprising. This outrage was captured by elected officials and reflected in regulation. The Sherman Antitrust Act, for example, was used to break up Standard Oil, which, ironically, made John Rockefeller wealthier. Congress created the Interstate Commerce Commission to combat anti-competitive practices by the railroad industry. However, as Gordon (2018) writes, the ICC itself became a government-led cartel that stifled competition and innovation. The impact of regulation, therefore, can either enrich business owners or make their operations more difficult.
The issue remains, however, that the consequences of non-disclosure are vague and abstract. After all, what is the real value of trust to the bottom line? Research cited by Wulf and Seizov (2022) might indicate strong support for disclosure, but it doesn’t articulate the cost of non-compliance beyond hypothetical risks and financial loss. Our selective indifference to non-disclosure suggests that such risks may be malleable. Facebook, after all, pushed through the Cambridge Analytica scandal and has seen steady user and revenue growth since (Richter, 2021). There is little reason to think that thinly disclosed AI would hamper sales unless it stands in the way of consumers getting what they need. In other words, as much as we may like trust to matter for the abstract value it represents, it only matters if we’re dissatisfied. If companies do not face the prospect of losing customers, they are not very likely to honor the public good for its own sake. In fact, the potential for lost business, as cited by multiple researchers, is a strong incentive not to disclose AI. Yet market forces can also influence behavior in the opposite direction. Shiyyab et al (2023), for example, write that banks were more likely to disclose their use of AI if it gave them a competitive advantage. Disclosure was tightly coupled with perceptions of competence, which played well with customers and regulators (pp. 7-8). In a crowded field like finance, competence might be the difference between winning and losing business. Banks, however, were not incentivized to disclose out of a sense of moral obligation.
As the previous paragraphs show, our relationship with trust, disclosure, and transparency is highly contingent upon circumstance. Likewise, the private sector’s commitment to disclosure is, at best, inconsistent and certainly beholden to market forces, not a sense of moral duty. Moreover, industry’s commitment to government collaboration can be equally transient. Wheeler (2023), for example, criticized Sam Altman’s calls for government oversight after the OpenAI CEO threatened to close the company’s European operations if regulations became too onerous. Attempts to influence policy by flexing economic might are one reason to be skeptical of the industry’s pledge to self-regulate. Brandie Nonnecke, director of the CITRIS Policy Lab at Berkeley, called self-regulation tantamount to students getting to write their own exam (Heikkilä, 2024). And when the UK announced a deal to share public data with OpenAI, critics called the agreement akin to leaving the fox in charge of the henhouse (Booth, 2025).
Outlook
While the preceding discussion forms the basis for regulation, it should be clear by now that legislating trust is vague and imprecise. Trust is a value that is both deeply held and discretionary; its importance is difficult to quantify until it clearly is not. Indeed, many of the views expressed here emphasized the importance of disclosure, trust, and bias prevention, but offered little on exactly how these issues should be solved. This is not an indictment of those views; rather, it illustrates the persistent lack of clarity lawmakers, observers, and the industry have over how to approach and solve AI disclosure. We are, to a certain extent, stuck in a high-level orbit around the issue of transparency until engineers figure out how it can be accomplished. This does not mean there is nothing to be done, however.
To begin with, industry pledges to cooperate with lawmakers should be accepted at face value. As Drenik (2024) writes, responsible AI development is a team sport involving citizens, governments, and the industry. At the same time, governments are responsible for the welfare of the societies they oversee, particularly when private priorities create blind spots that are unreceptive to the greater good. Regulation is essential, therefore, to protect public interests that are not captured on the corporate balance sheet. The expertise gap cited by Di Porto (2021) and Guha et al (2024) further underscores the need for public and private sector collaboration. There is even a thread of common ground to be found in Sam Altman’s threats to pull out of the EU (Wheeler, 2023). The technical limitations of AI platforms to provide full disclosure, the lack of technical expertise in government, and the private sector’s chafing at vague regulations all argue for simple disclosure policy.
It is the view of this author that basic transparency honors the technical limitations of AI platforms and governments alike. Sundar Pichai is correct that companies should not outsource the accuracy of their systems to the customer (Islam et al, 2025); however, citizens need to take responsibility for thinking critically as well. Simple disclosures, such as notifying users that the content they’re viewing was created by AI, that the agent they’re interacting with is artificial, or that the system they’re using is predisposed to agree with them, are all technically feasible while preserving human agency. They should also be relatively non-controversial and more likely to succeed legislatively.
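As a sketch of how lightweight such a notification could be, the snippet below simply attaches a plain-language notice to any AI-generated reply before it reaches the user. The wording, structure, and placement are assumptions for illustration; the point is that the disclosure travels with the content rather than living in a buried policy page.

```python
from dataclasses import dataclass


@dataclass
class Reply:
    """One response returned to a user, with its disclosure flag attached."""
    text: str
    ai_generated: bool


def with_disclosure(reply: Reply) -> str:
    """Prefix AI-generated replies with a plain-language notice (illustrative wording)."""
    if reply.ai_generated:
        return "[This response was generated by an AI assistant.]\n" + reply.text
    return reply.text


if __name__ == "__main__":
    print(with_disclosure(Reply("Your order ships Tuesday.", ai_generated=True)))
```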
Moreover, simple disclosures, and particularly an iterative approach, are more flexible than attempting comprehensive legislation in a single stroke. The rooftop solar revolution, for example, started in 1978 with the Public Utilities Regulatory Policies Act, which demonopolized power generation. Subsequent regulations rewarded solar adoption and allowed private citizens to sell their electricity to the grid (Shively and Ferrare, 2019). Attempting to accomplish all of this in one legislative push would have been impossible. The incremental march toward decentralized power generation, the politics of green energy, and the complexity of global supply chains would have been difficult to foresee in 1978, but iteration allowed policymakers to adapt to realities on the ground. In short, simple disclosure helps prevent over-indexing on the current economic and political landscape. This is particularly important given recent fears of an AI bubble and over-inflated ROI (Challapally et al, 2025).
While a deep dive on AI legislation is out of scope for this paper, a few points are worth considering. First, the Biden Administration published a blueprint for an AI bill of rights, which emphasizes diverse perspectives in AI systems, control over personal data, and proactive protection from harm, among other principles (W.H., n.d.). These concepts are not law, nor are they binding in any way, but they have served as a basis for AI legislation in New York, which seeks to formalize many of those ideals into law (Vanel et al, 2025). At the federal level, the AI Disclosure Act of 2023 seeks to leverage existing FTC regulations on unfair or deceptive acts and practices. Violators would be held liable under provisions of the Federal Trade Commission Act; however, as of December 2025, the legislation has been introduced but has advanced no further (Torres, 2023). At the same time, more aggressive attempts at legislation have been met with resistance. Governor Newsom of California, for example, recently vetoed a bill prohibiting the release of companion chatbots unless it could be reasonably assured that they wouldn’t produce harmful content (Wong & Christopher, 2025). It should be noted, however, that none of the legislative approaches reviewed by this author, with the possible exception of the AI Disclosure Act, embodies the concept of simplicity. They all incorporate much of the vague language discussed, and at times admonished, throughout the literature review.
Meanwhile, market forces bear watching. Regulation is driving a potential divide between AI publishers and AI adopters. Mark Surman’s and Christina Montgomery’s calls for transparency are indicators of where adopters’ sentiments lie (Renieris et al, 2024; Zorthian, 2023). Mozilla and Chevron, after all, have large consumer and corporate customer bases, respectively, and arguably shoulder the lion’s share of risk if AI hallucinates or generates undesirable outputs. At the same time, the Los Angeles Times reported that Governor Newsom’s veto was the direct result of intense lobbying from TechNet, a lobbying group whose members include, among others, OpenAI, Google, and Meta (Wong & Christopher, 2025). This bifurcation in the tech industry may prove substantial in the years to come. As more states enact AI policy, the debate will likely be most intense in markets where customers can easily take their business elsewhere. While the degree to which any of this matters is still to be determined, one litmus test will be how the first major disclosure scandal is perceived by consumers and investors. If bankruptcy follows, that will surely get the private sector’s attention. If, as was the case with Facebook, business continues as usual (Richter, 2021), compliance becomes far less urgent.
One could be forgiven for coming away from this discussion harboring a degree of cynicism. After all, historical precedent and private sector power are justifiable concerns, and breaches of trust and disclosure can be deeply destabilizing and unpredictable. However, one must consider both sides of the historical coin. The Gilded Age ended in widespread protests, government trust-busting, and reforms to America’s industrial power base. The financial crisis of the Great Recession spawned national Occupy protests and reforms to the banking industry. And regulatory reforms eventually caught up to the harms of leaded gasoline and cigarettes. In all cases, public outcry led to discussion and change. AI disclosure is certainly no less substantial in its potential to drive protests like those of times past. Whether our approach to AI regulation follows the same path is, as yet, unwritten.
That said, there is ample cause for optimism. AI legislation is advancing at the federal and state levels. These attempts are certainly imperfect, but as has been the case with environmental and energy policy, regulation can improve. Nonetheless, it is the view of this author that current initiatives are attempting to bite off too much of the regulatory pie. Trust, safety, and harm are vague, unenforceable, and contested; at the very least, they mean something different to everyone who considers them. Ultimately, lawmakers need to trust the public to make informed decisions about their use of AI, much in the way nutrition labels, prescription drug disclosures, and warnings on cigarettes all give consumers the opportunity to take control. In short, we need to accept that with personal agency comes the possibility that we may make the wrong decision. It may be true that laws are the basis of civil society, but they replace neither agency nor personal accountability.
Disclosure: the thumbnail image for this article was created by Gemini 2.5 (prompt: AI, transparency, regulation).
References
Alikhani, M. & Hassan, S. (2025). Hype and harm: Why we must ask harder questions about AI
and its alignment with human values. Brookings.
ANT (2025). Tracing the thoughts of a large language model. Anthropic.
https://www.anthropic.com/research/tracing-thoughts-language-model
Babu, J. (2025). Microsoft racks up over $500 million in AI savings while slashing jobs,
Bloomberg News reports. Reuters. https://www.reuters.com/business/microsoft-racks-up-over-500-million-ai-savings-while-slashing-jobs-bloomberg-2025-07-09/
Bauder, D. (2023). Sports Illustrated found publishing AI generated stories, photos and authors. AP News.
Booth, R. (2025). UK government urged to offer more transparency over OpenAI deal. The Guardian.
Challapally, A., Pease, C., Raskar, R. & Pradyumna, C. (2025). The GenAI divide: State
of AI in business 2025. MIT. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
Chee, F.Y. (2023). EU, Google, to develop voluntary AI pact ahead of new AI rules, EU’s Breton
says. Reuters. https://www.reuters.com/technology/eu-google-develop-voluntary-ai-pact-ahead-new-ai-rules-eus-breton-says-2023-05-24/
Drenik, G. (2024). The pitfalls of AI self-regulation. Forbes.
https://www.forbes.com/sites/garydrenik/2024/10/22/the-pitfalls-of-ai-self-regulation/
Di Porto, F. (2021). Algorithmic disclosure rules. Artificial Intelligence and Law, 31, 13-51.
https://doi.org/10.1007/s10506-021-09302-7
El Ali, A., Venkatraj, K.P., Morosoli, S., Naudts, L., Helberger, N., & Cesar, P. (2024).
Transparent AI disclosure obligations: Who, what, when, where, why, how. Cornell University. https://arxiv.org/abs/2403.06823
Engelke, P. (2020). AI, society, and governance. An introduction. Atlantic Council.
https://www.jstor.org/stable/resrep29327
Guha, N., Lawrence, C.M., Gailmard, L.A., Rodolfa, K.T., Surani, F., Bommasani, R., Raji, I.D.,
Cuéllar, M.F., Honigsberg, C., Liang, P. & Ho, D.E. (2024). AI regulation has its own alignment problem: The technical and institutional feasibility of disclosure, registration, licensing, and auditing. The George Washington Law Review, 92(6), 1473-1557. https://www.gwlr.org/wp-content/uploads/2024/12/92-Geo.-Wash.-L.-Rev.-1473.pdf
Gordon, J.S. (2018). Regulators take on Silicon Valley, as they did earlier innovators. The Wall
Street Journal. https://www.wsj.com/articles/regulators-take-on-silicon-valley-as-they-did-earlier-innovators-1524005477
GPT. (n.d.). How ChatGPT and our foundation models are developed. OpenAI.
https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-foundation-models-are-developed
Heikkilä, M. (2024). AI companies promised to self-regulate one year ago. What’s changed? MIT
Technology Review. https://www.technologyreview.com/2024/07/22/1095193/ai-companies-promised-the-white-house-to-self-regulate-one-year-ago-whats-changed/
Hossain, M.Z., Hasan, L., Dewan, Md. A., & Monira, N.A. (2024). The impact of
artificial intelligence on project management efficiency. International Journal of Management Information Systems and Data Science, 1(5), 1-17. doi: https://doi.org/10.62304/ijmisds.v1i05.211.
IBM. (n.d.). Driving a reimagined customer experience with an AI-powered virtual
assistant. IBM. https://www.ibm.com/case-studies/camping-world
Islam, F., Clun, R., & McMahon, L. (2025). Don’t blindly trust what AI tells you, says Google’s
Sundar Pichai. BBC. https://www.bbc.com/news/articles/c8drzv37z4jo
Keppeler, F. (2024). No thanks dear AI! Understanding the effects of disclosure and deployment
of artificial intelligence in public sector recruitment. Florian Keppeler. https://florian-keppeler.com/wp-content/uploads/2023/05/Keppeler_NoThanksDearAI_Preprint-1.pdf
Kim, C. (2023). Sports Illustrated accused of publishing AI-written articles. BBC.
https://www.bbc.com/news/world-us-canada-67560354
Klarna. (2024). Klarna AI assistant handles two-thirds of customer service chats in its
first month. Klarna. https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/
Kremer, A., Luget A., Mikkelsen, D., Soller, H., Strandell-Jansson, M., & Zingg, S. (2023). As
gen AI advances, regulators-and risk functions-rush to keep pace. McKinsey & Company. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/as-gen-ai-advances-regulators-and-risk-functions-rush-to-keep-pace
Life. (2023). Pause giant AI experiments: An open letter. Future of Life Institute.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Lu, S. (2020). Algorithmic opacity, private accountability, and corporate social disclosure in the
age of artificial intelligence. Vanderbilt Journal of Entertainment & Technology Law, 23(1), 99-159. https://scholarship.law.vanderbilt.edu/jetlaw/vol23/iss1/3/
Luo, X., Tong, S., Fang, Z. & Qu, Z. (2019). Frontiers: Machines vs humans: The impact of
artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937-947. https://doi.org/10.1287/mksc.2019.1192
Morabito, C. [CNBC]. (2025, Oct 14). Why the AI boom might be a bubble. [Video]. YouTube.
https://www.youtube.com/watch?v=oLDcbkEqi-M
NCSL. (2025). Artificial intelligence 2025 legislation. NCSL. https://www.ncsl.org/technology-
and-communication/artificial-intelligence-2025-legislation
Perez, S. (2025). Sam Altman warns there’s no legal confidentiality when using ChatGPT as a
therapist. TechCrunch. https://techcrunch.com/2025/07/25/sam-altman-warns-theres-no-legal-confidentiality-when-using-chatgpt-as-a-therapist/
Renieris, E.M., Kiron, D., & Mills, S. (2024). Artificial intelligence disclosures are key to
customer trust. MIT Sloan Management Review. https://sloanreview.mit.edu/article/artificial-intelligence-disclosures-are-key-to-customer-trust/
Resnik, D.B. & Hosseini, M. (2025). Disclosing artificial intelligence use in scientific research
and publication: When should disclosure be mandatory, optional, or unnecessary? Accountability in Research, 1-13. https://doi.org/10.1080/08989621.2025.2481949
Richter, F. (2021). Facebook keeps on growing. Statista.
https://www.statista.com/chart/10047/facebooks-monthly-active-users/
Russell, M. (2025). AI will shape the future of marketing. Harvard.
https://professional.dce.harvard.edu/blog/ai-will-shape-the-future-of-marketing/
Saheb, T. & Saheb, T. (2024). Mapping ethical artificial intelligence policy landscape: AI mixed
method analysis. Science and Engineering Ethics, 30(9), 1-26. https://doi.org/10.1007/s11948-024-00472-6
Shiyyab, F.S., Alzoubi, A.B., Obidat, Q.M. & Alshurafat, H. (2023). The impact of artificial
intelligence disclosure on financial performance. The International Journal of Financial Studies, 11(3), 115. https://doi.org/10.3390/ijfs11030115
Shively, B. & Ferrare, J. (2019). Understanding today’s electricity business. Enerdynamics.
Siegal, A. & Garcia, I. (2024). A deep dive into Colorado’s artificial intelligence act. National
Association of Attorneys General. https://www.naag.org/attorney-general-journal/a-deep-dive-into-colorados-artificial-intelligence-act/
Soechtig, S. & Seifert, J. (Directors). (2018). The devil we know [Film]. Netflix.
Tonello, M. (2025). AI risk disclosures in the S&P 500: Reputation, cybersecurity, and
regulation. Harvard. https://corpgov.law.harvard.edu/2025/10/15/ai-risk-disclosures-in-the-sp-500-reputation-cybersecurity-and-regulation/
Torres, R. (2023). H.R.3831 - AI disclosure act of 2023. Congress.
https://www.congress.gov/bill/118th-congress/house-bill/3831/text
Vanel, C., Blumencranz, J.R., Hermelyn, R.B., & Hyndman, A. (2025). Assembly Bill A3265. The
New York State Senate. https://www.nysenate.gov/legislation/bills/2025/A3265
Vedvick, J. (2024). Policy response 1: A review of policy diffusion in the public and private
sector. Jeff Vedvick. https://jeffvedvick.com/essays-1/2024/7/28/policy-response-1-a-review-of-policy-diffusion-in-the-public-and-private-sector
Vedvick, J. (2024b). The ethics of performance enhancing drug use in baseball. Jeff Vedvick.
https://jeffvedvick.com/essays-1/2024/6/30/the-ethics-of-performance-enhancing-drug-use-in-baseball
Vedvick, J. (2025). Why your AI can’t fire you (yet): The irreducible core of human project
management. Jeff Vedvick. https://jeffvedvick.com/essays-1/2025/10/17/the-demise-of-the-project-manager-is-greatly-exaggerated-the-future-and-present-of-ai-and-project-management
Vedvick, J. (2025b). Please hold: Why your call center job (mostly) isn’t going anywhere. Jeff
Vedvick. https://jeffvedvick.com/essays-1/2025/11/14/-call-center-jobs-artificial-intelligence
Wachowski, L. & Wachowski, L. (Directors). (1999). The Matrix [Film]. Warner Bros.
W.H. (n.d.). Blueprint for an AI bill of rights. The White House.
https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/
Wheeler, T. (2018). Who makes the rules in the new Gilded Age? Brookings.
https://www.brookings.edu/articles/who-makes-the-rules-in-the-new-gilded-age/
Wheeler, T. (2023). The three challenges of AI regulation. Brookings.
https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/
Wiener, S. (2025). SB-53 Artificial intelligence models: large developers. California Legislative
Information. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB53
Wong, Q. & Christopher, N. (2025). Gov. Newsom signs AI safety bills, vetoes one after pushback
from tech industry. The Los Angeles Times. https://www.latimes.com/business/story/2025-10-13/gov-newsom-signs-ai-safety-bill
Wulf, A.J. & Seizov, O. (2022). “Please understand we cannot provide further information”:
evaluating content and transparency of GDPR-mandated AI disclosures. AI & Society, 39, 235-256. https://doi.org/10.1007/s00146-022-01424-z
Zorthian, J. (2023). OpenAI CEO Sam Altman asks Congress to regulate AI. Time.