Agustín Negre, GLTH Latam Chapter Leader & Executive Manager at Alfaro Abogados
Generative Artificial Intelligence (AI) is radically altering the workplace, particularly through the broad accessibility of Large Language Models (LLMs) such as OpenAI's ChatGPT. These models, once the preserve of specialists, are now widely available.
In the legal domain, AI is enhancing management tools and judicial processes, showing how broader access to technology can drive significant advances.
In March 2024, the AI Laboratory of the University of Buenos Aires (UBA IALAB) published a study titled "Implementing Generative AI in Legal Firms and Legal Departments." It highlighted the need for an integrated approach to managing AI tools, showing that proper use can deliver significant time savings and efficiency gains.
The Need for Regulation
The immediate success of ChatGPT, which reached over a million users within a month of its launch, underscored its potential impact and highlighted the need for regulation. Italy initially blocked ChatGPT, reinstating access only after OpenAI committed to greater transparency and stronger data protection. Other jurisdictions, including the European Union, Canada, and the United States, are also examining the tool's scope.
The OECD and the European Commission have outlined fundamental AI design principles focused on human rights, transparency, accountability, security, non-discrimination, and societal benefit. The Ibero-American Data Protection Network (RIPDP) warns of the risks associated with AI services such as those developed by OpenAI, L.L.C., citing concerns about the legal basis for data processing, the information provided to users, data transfers made without consent, the lack of age-verification measures, and data security.
The Need for AI Usage Policies
Warnings from international entities, together with decisions by companies such as Amazon, Bank of America, and Verizon to ban employee use of ChatGPT, underscore the need for clear corporate policies governing AI tools. Large organizations may develop their own generative AI systems, while smaller ones may have to rely on standardized solutions due to budget constraints.
To prevent misuse by employees or business partners, organizations should establish a comprehensive AI Usage Policy. This policy should outline the permitted and prohibited uses of AI, set data management standards consistent with privacy and data protection laws, and include processes for handling sensitive personal data and erroneous outputs.
The policy should also define criteria for selecting AI providers and technologies that comply with ethical and legal standards, incorporate these standards into contracts, and provide ongoing training and reviews to ensure policy compliance.
An AI misuse reporting channel should be accessible to employees and external users, emphasizing the organization’s commitment to ethical AI use. Continuous education on AI ethics, periodic policy reviews, and open communication channels are essential to adapt to new technological and legislative developments.
In conclusion, implementing robust AI usage policies is crucial not only to protect user interests and privacy but also to maximize the benefits of advanced technologies. By doing so, organizations can ensure that AI adoption positively contributes to society and strengthens trust in emerging technologies.
www.alfarolaw.com