AI has reshaped our world, bringing extraordinary benefits and raising complex legal and ethical questions. In this op-ed, Susan L. Karamanian, Dean of HBKU College of Law, delves into the intersection of AI, education, and the law, highlighting the urgent need for interdisciplinary collaboration and critical thinking to navigate AI’s challenges and opportunities effectively.
Each day we learn more about how AI influences our lives. AI-driven systems bring profound benefits. They have enabled medical researchers and doctors to better understand the causes of health problems. They have improved our ability to make reliable predictions, whether about traffic flow, thereby easing congestion, or the weather, so that farmers can better time planting and harvesting.
Yet the very technology that helps us answer questions and gather information at the click of a button raises critical legal and ethical issues. Three topics have kept me busy over the past year because they are intertwined with what we teach in law school. First, fundamental individual rights are at stake, whether the protection of personal data, non-discrimination in the collection and use of data, or personal autonomy, particularly when personal information is used to draw inferences about human behavior. Second, we face a lack of transparency, as creators rely on intellectual property (IP) law to keep the details of their algorithms secret. Can we replicate an algorithm’s functions? If not, what does that mean for our understanding of these systems? Third, a host of other IP issues abound, such as the protection of copyright in the information used by the algorithm, as evidenced in the NY Times v. Microsoft case. Another pressing IP issue is whether the creator of an algorithm holds the copyright to AI-generated content.
Further, even when conduct is legal, a creator or user of AI must be mindful of misuse or manipulation of the tools. AI’s influence over an individual’s perception of reality should not be underestimated. Each of us has likely fallen prey to a deepfake, and when it happens, we write it off as a lapse in judgment and promise to pay better attention next time. AI’s daily influence, however, is more subtle, as algorithms flood our social media accounts with advertisements and news deemed aligned with our preferences.
Complicating the existing landscape is the lack of a legal regime that addresses the many issues arising from AI and the cross-border flow of data. The European Union has led the way with the 2024 EU AI Act, which built on the ethics guidelines issued in 2019 by the European Commission’s High-Level Expert Group on AI. The Act sets out the obligations of AI developers and deployers for certain AI uses. It applies to companies in the EU as well as those outside it, to the extent they provide services within the EU. From a governance perspective, I see the EU setting the standard, as it did with the General Data Protection Regulation (GDPR) in the privacy space, but a substantial regulatory gap remains, as many countries, including the United States, have not enacted comprehensive legislation.
Educators have been told that AI is to be embraced in the classroom, with certain conditions attached: AI has tremendous value, and students will gravitate towards it, even more so if we forbid its use. We are told that education now requires the development of critical analytical skills, particularly in crafting the “prompts” students use to gain access to information. Quite frankly, whether in law, engineering, or history, students have traditionally been pushed to reason: to identify key issues, ask the right questions, and bring creative insights to understanding and problem-solving. So they have been devising “prompts” and applying them for years.
The notion that AI eases education, putting students on a kind of “autopilot,” understates the magnitude of the challenge that awaits educators. The AI revolution demands more of reasoning, not less. We must instill in students the ability to distinguish the absurd from the logical, the intensely private from the public, and the scheming from the patently objective. If anything, today’s students should be immersed in logic so they can make sense of AI’s guidance. They should be steeped in philosophy so that the necessary moral perspective does not take a backseat.
Second, students not versed in computer science or data analytics should consider expanding their knowledge. A few weeks ago, Georgetown University in Qatar (GU-Q) hosted a two-day conference titled “AI Uprising: Opportunities and Challenges for the Future of Work and Its Impact on the Environment” under the auspices of its Hiwaraat Series. I had the unique opportunity to interview Her Excellency Ms. Deema Al Yahya, Secretary-General of the Digital Cooperation Organization, an inter-governmental organization headquartered in Riyadh that promotes inclusive access to the digital economy. Her Excellency observed that many working in the policy and legal space lack the technical training to appreciate how AI works. Her words were music to my ears: since its inception, HBKU College of Law has offered law as a graduate degree, recognizing that a lawyer should hold at least one first degree, and indeed some of our law students have previous degrees in engineering and the sciences.
So, from my perspective as a legal educator in particular, the AI revolution is a call for secondary and tertiary institutions to move away from the tendency to over-specialize, sorting students into STEM and non-STEM boxes with never the twain meeting. The computer programmer can no longer remain ignorant of the full consequences of what he or she is asking the program to do, and with what information. Policymakers and lawyers cannot remain ignorant of the full consequences of new technology. In short, it is an exciting time to think about how we can revamp education to ensure that these multiple objectives, key to cherished social values, can be realized.