  • Algorithmic Adjudication and Constitutional AI—The Promise of A Better AI Decision Making Future? – SMU Science and Technology Law Review

    ‘Algorithmic governance is when algorithms, often in the form of AI, make decisions, predict outcomes, and manage resources in various aspects of governance. This approach can be applied in areas like public administration, legal systems, policy-making, and urban planning. Algorithmic adjudication involves using AI to assist in or decide legal disputes. This often includes the…


  • The ethical implications of AI hype – AI and Ethics

    ‘Key to ensuring effective dialogue around artificial intelligence (AI), including its ethical and legal ramifications, is its accurate representation. However, as exemplified by the most recent wave of AI hype, the communication of AI’s present (and future) capabilities is often largely exaggerated and overinflated.’ Link: https://link.springer.com/article/10.1007/s43681-024-00539-x


  • AI will have bigger impact on law than the internet, says thinktank – Legal Futures

    ‘Artificial intelligence (AI) will have a greater impact on legal services than the internet revolution, a roundtable sponsored by the Solicitors Regulation Authority (SRA) has predicted.’ Link: https://www.legalfutures.co.uk/latest-news/ai-will-have-bigger-impact-on-law-than-the-internet-says-thinktank


  • No god in the machine: the pitfalls of AI worship – The Guardian

    ‘The rise of artificial intelligence has sparked a panic about computers gaining power over humankind. But the real threat comes from falling for the hype.’ Link: https://www.theguardian.com/news/article/2024/aug/08/no-god-in-the-machine-the-pitfalls-of-ai-worship


  • Your duty: What you should know about AI – American Bar Association

    ‘Lawyers need a functional understanding of artificial intelligence technologies, said Daniel W. Linna Jr., director of Law and Technology Initiatives at Northwestern Pritzker School of Law, adding that one of the “big mistakes” they make is to only consider generative AI, when there are so many other computational tools to understand.’ Link: https://www.americanbar.org/news/abanews/aba-news-archives/2024/08/your-duty-what-to-know-about-ai/


  • The EU’s AI Act is now in effect. Here’s what you need to know – Quartz

    ‘The European Union’s law to regulate the development, use, and application of artificial intelligence is now in effect.’ Link: https://qz.com/european-union-ai-act-in-effect-eu-legislation-big-tech-1851610607


  • Managed by the algorithm: how AI is changing the way we work – Algorithm Watch

    ‘Automated decision-making systems control our work, whether in companies or via platforms that allocate jobs to independent contractors. Companies can use them to increase their efficiency, but such systems have a downside: They can also be used to surveil employees and often conceal the exploitation of workers and the environment.’ Link: https://algorithmwatch.org/en/ai-in-workplace-explained/


  • UK’s AI bill to focus on ChatGPT-style models – Financial Times

    ‘UK tech secretary Peter Kyle has reassured major technology companies that a long-awaited artificial intelligence bill will be narrowly focused on the most advanced models and will not become a sprawling “Christmas tree bill” to regulate the nascent industry.’ Link: https://www.ft.com/content/ce53d233-073e-4b95-8579-e80d960377a4


  • AI Has a Revolutionary Ability to Parse Details. What Does That Mean for Business? – Harvard Business Review

    ‘Humans have relied on generalizations forever as a mental shortcut — and a way of running a business efficiently. But just as advancements in AI are making it possible to move beyond a handful of customer personas to infinitely personalizable products and messaging, they are also revealing to us a broader world full of ever-changing…


  • Sustainable AI: a contradiction in terms? – Algorithm Watch

    ‘There is a wide range of potential applications for AI systems: They are supposed to make resource consumption more efficient, solve complex social problems such as the energy and mobility transition, create a more sustainable energy system, and facilitate research into new materials. AI is even seen as an essential tool for tackling the climate crisis.…


  • TechScape: Will OpenAI’s $5bn gamble on chatbots pay off? Only if you use them – The Guardian

    ‘The ChatGPT maker is betting big, while Google hopes its AI tools won’t replace workers, but help them to work better.’ Link: https://www.theguardian.com/technology/article/2024/jul/30/will-open-ais-5bn-gamble-on-chatbots-pay-off-only-if-you-use-them


  • Six Winning Strategies to Upskill Your Workforce for AI – National Law Review

    ‘Artificial intelligence (AI) has become a game-changer in the business world, helping to drive efficiencies, spark innovation and unlock new growth opportunities. According to PwC, AI could add a staggering $15.7 trillion to the global economy by 2030.’ Link: https://natlawreview.com/article/six-winning-strategies-upskill-your-workforce-ai


  • ABA issues first ethics guidance on a lawyer’s use of AI tools – American Bar Association

    ‘The American Bar Association Standing Committee on Ethics and Professional Responsibility released today its first formal opinion covering the growing use of generative artificial intelligence (GAI) in the practice of law, pointing out that model rules related to competency, informed consent, confidentiality and fees principally apply.’ Link: https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/


  • Consent in Crisis: The Rapid Decline of the AI Data Commons

    ‘General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first, large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view…


  • In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making – AI Magazine

    ‘The current literature on AI-advised decision making—involving explainable AI systems advising human decision makers—presents a series of inconclusive and confounding results. To synthesize these findings, we propose a simple theory that elucidates the frequent failure of AI explanations to engender appropriate reliance and complementary decision making performance. In contrast to other common desiderata, for example,…


  • Lights, Camera, Litigation: The Hidden Costs And Legal Minefield Of AI – Forbes

    ‘In the heart of Hollywood, a new star is rising—but it’s not the next A-list celebrity. Artificial intelligence is taking center stage, promising to revolutionize filmmaking while simultaneously creating a legal labyrinth that could cost unwary creators up to $150,000 per infringement. As AI tools become increasingly sophisticated, filmmakers find themselves walking a tightrope between…


  • Writing prompts: A quick guide for lawyers using generative AI – Future of Law

    ‘For law firms, in-house legal departments and lawyers from all backgrounds to make the most of generative AI, it is essential that they understand the art of crafting effective prompts.’ Link: https://www.lexisnexis.co.uk/blog/future-of-law/writing-prompts-a-quick-guide-for-lawyers-using-generative-ai


  • Red Teaming for GenAI Harms – Revealing the Risks and Rewards for Online Safety – Ofcom

    ‘As the new regulator for online safety, Ofcom is exploring how online services could employ safety measures to protect their users from harm posed by GenAI. One such safety intervention is red teaming, a type of evaluation method that seeks to find vulnerabilities in AI models. Put simply, this involves ‘attacking’ a model to see…


  • AI Act risks stalling innovation – Solicitors Journal

    ‘European tech companies express concerns over rushed AI legislation and its potential impact on innovation.’ Link: https://www.solicitorsjournal.com/sjarticle/ai-act-risks-stalling-innovation


  • Elon Musk’s X may be in breach of Australian privacy law over data harvesting for Grok AI – ABC

    ‘Australia’s privacy watchdog says social media platform X (formerly Twitter) may be in breach of Australian privacy law after it emerged users were automatically opted in to having their posts used to build artificial intelligence (AI) systems.’ Link: https://www.abc.net.au/news/science/2024-07-31/elon-musk-x-breach-privacy-law-data-harvest-grok-ai/104054400


  • Microsoft calls for new laws on AI-generated deepfakes – Fast Company

    ‘Microsoft is calling on Congress to pass new laws that make it illegal to use AI-generated voices and images to defraud people, especially seniors and children.’ Link: https://www.fastcompany.com/91165063/microsoft-white-paper-new-laws-ai-deepfakes-frauid


  • Racism and AI: “Bias from the past leads to bias in the future” – OHCHR

    ‘“Recent developments in generative artificial intelligence and the burgeoning application of artificial intelligence continue to raise serious human rights issues, including concerns about racial discrimination,” said Ashwini K.P., UN Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance.’ Link: https://www.ohchr.org/en/stories/2024/07/racism-and-ai-bias-past-leads-bias-future


  • Navigating the United States Legislative Landscape on Voice Privacy: Existing Laws, Proposed Bills, Protection for Children, and Synthetic Data for AI – Center for Robust Speech Systems (CRSS)

    ‘Privacy is a hot topic for policymakers across the globe, including the United States. Evolving advances in AI and emerging concerns about the misuse of personal data have pushed policymakers to draft legislation on trustworthy AI and privacy protection for its citizens. This paper presents the state of the privacy legislation at the U.S. Congress…


  • XAI is in trouble – AI Magazine

    ‘Researchers focusing on how artificial intelligence (AI) methods explain their decisions often discuss controversies and limitations. Some even assert that most publications offer little to no valuable contributions. In this article, we substantiate the claim that explainable AI (XAI) is in trouble by describing and illustrating four problems: the disagreements on the scope of XAI,…


  • Towards Trustworthy AI: A Review of Ethical and Robust Large Language Models – Proceedings of the IEEE

    ‘The rapid progress in Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust. This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact. Many…


  • Awareness and Adoption of AI Technologies in the Libraries of Karnataka – Dr Felcy D’Souza

    ‘This study aims to determine the awareness and adoption of Artificial Intelligence (AI) technologies in the respondent libraries of Karnataka based on demographic variables such as gender, age, academic status, and professional experience. This study employed a survey research method to evaluate the awareness and adoption of AI technologies among the respondent library professionals in…


  • Deepfake Defences: Mitigating the Harms of Deceptive Deepfakes – Ofcom

    ‘Deepfakes are audio-visual content that has been generated or manipulated using AI, and that misrepresents someone or something. New generative AI tools allow users to create wholly new content that can be life-like and make it significantly easier for anyone with modest technical skill to create deepfakes.’ Link: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/deepfake-defences


  • AI in the public sector: white heat or hot air? – Ada Lovelace Institute

    ‘The UK’s new administration is warming up to the ‘white heat’ of technology. During the election campaign, Labour politicians announced plans for using AI to help with truancy, to support jobseekers and to analyse hospital scans. Peter Kyle, the incoming Secretary of State for Science, Innovation and Technology, has spoken warmly about the power of technology to save time and make…


  • Understanding XAI Through the Philosopher’s Lens: A Historical Perspective

    ‘Despite explainable AI (XAI) has recently become a hot topic and several different approaches have been developed, there is still a widespread belief that it lacks a convincing unifying foundation. On the other hand, over the past centuries, the very concept of explanation has been the subject of extensive philosophical analysis in an attempt to…


  • Academic backlash as publisher lets Microsoft train AI on papers – Times Higher Education (£)

    ‘Researchers claim that Taylor & Francis kept details of deal quiet, but company insists that citation and limits on verbatim quoting will be sacrosanct.’ Link: https://www.timeshighereducation.com/news/academic-backlash-publisher-lets-microsoft-train-ai-papers


  • Addressing the elephant in the room: engaging students in ChatGPT conversations on assessments – Journal of Teaching in Travel and Tourism

    ‘The development of technology presents opportunities and challenges for the education system. This study investigates the integration of ChatGPT into higher education, focusing on tourism studies. Using a duoethnography approach, the study explores the experiences of two tourism educators who incorporate ChatGPT into their pedagogy and assessment methods. Results reveal that acknowledging students’ mixed responses…


  • Implementation of the EU AI act calls for interdisciplinary governance – AI Magazine

    ‘The European Union Parliament passed the EU AI Act in 2024, which is an important milestone towards the world’s first comprehensive AI law to formally take effect. Although this is a significant achievement, the real work begins with putting these rules into action, a journey filled with challenges and opportunities. This perspective article reviews recent…


  • Large Language Models for Judicial Entity Extraction: A Comparative Study – National University of Singapore

    ‘Domain-specific Entity Recognition holds significant importance in legal contexts, serving as a fundamental task that supports various applications such as question-answering systems, text summarization, machine translation, sentiment analysis, and information retrieval specifically within case law documents. Recent advancements have highlighted the efficacy of Large Language Models in natural language processing tasks, demonstrating their capability to…


  • International AI competition statement signals joint commitment by antitrust enforcers – OUT-LAW.com

    ‘Competition authorities globally are coalescing in their efforts to understand, monitor, and proactively address competition issues that may arise from rapidly developing artificial intelligence (AI) technology.’ Link: https://www.pinsentmasons.com/out-law/news/ai-competition-statement-joint-commitment-antitrust-enforcers


  • Making the Justice Leap – AALL Spectrum

    ‘Using generative AI to bridge the Literacy, Equity, Access and Privilege gaps for self-represented litigants.’ Link: https://tinyurl.com/aall-spectrum


  • The Unexpected Robustness of American AI Regulation – KU Leuven

    ‘Artificial Intelligence (AI) is increasingly subject to regulation in various jurisdictions, including the United States of America (U.S.). This blogpost provides an overview of the current regulatory landscape in the U.S. and examines key initiatives at both federal and state level. By exploring both the potential and shortcomings of these efforts, this blogpost aims to…


  • AI’s Data Appetite Is Huge. That’s a Problem for Privacy Laws – Bloomberg Law

    ‘Generative AI’s voracious consumption of data is starting to run up against strict rules protecting individuals’ rights to data privacy in Europe and around the world.’ Link: https://news.bloomberglaw.com/artificial-intelligence/ais-data-appetite-is-huge-thats-a-problem-for-privacy-laws


  • Gen AI Is Coming for Remote Workers First – Harvard Business Review

    ‘Automation has historically impacted blue-collar jobs first, whereas white-collar jobs benefited. The wave of remote work brought on by the Covid-19 pandemic further empowered white-collar workers with more autonomy through remote work. However, generative AI is changing this narrative. Remote workers are now more susceptible to automation due to their tasks being digital and thus…


  • 20 essential AI terms all UK lawyers need to know – Future of Law

    ‘As artificial intelligence (AI) continues to advance and become more integrated into various industries, including the legal field, it’s crucial for lawyers in the UK to familiarise themselves with the relevant terminology. Understanding these terms will not only help you communicate more effectively with clients, colleagues, and experts but also enable you to navigate the…


  • The AI community building the future? A quantitative analysis of development activity on Hugging Face Hub – Journal of Computational Social Science

    ‘Open model developers have emerged as key actors in the political economy of artificial intelligence (AI), but we still have a limited understanding of collaborative practices in the open AI ecosystem. This paper responds to this gap with a three-part quantitative analysis of development activity on the Hugging Face (HF) Hub, a popular platform for…


  • AI’s Real Hallucination Problem – The Atlantic

    ‘Two years ago, OpenAI released the public beta of DALL-E 2, an image-generation tool that immediately signified that we’d entered a new technological era. Trained off a huge body of data, DALL-E 2 produced unsettlingly good, delightful, and frequently unexpected outputs; my Twitter feed filled up with images derived from prompts such as close-up photo of…


  • The AI Regulatory Regimes of the EU and the UK and how best to comply – Kingsley Napley Corporate and Commercial Law Blog

    ‘The European Union’s AI Act and the UK’s AI Bill in the House of Lords represent two significant legislative efforts aimed at regulating artificial intelligence (AI) within their respective jurisdictions. Both frameworks seek to ensure ethical AI development and usage while fostering innovation, yet they diverge significantly in their regulatory approaches and implications.’ Link: https://www.kingsleynapley.co.uk/insights/blogs/corporate-and-commercial-law-blog/the-ai-regulatory-regimes-of-the-eu-and-the-uk-and-how-best-to-comply


  • Can AI Help Your Company Innovate? It Depends – Harvard Business Review

    ‘Companies need new ways to innovate quickly, cheaply, and productively. Many, quite reasonably, wonder how deploying AI might help. To investigate, we researched how companies are using AI for innovation and found that tools are just tools — success depends on how organizations use these new tools now at their disposal. To investigate what kinds…


  • Google’s wrong answer to the threat of AI – stop indexing content – The Guardian

    ‘The search engine’s response to ChatGPT and its ilk is to take a highly partial approach to what it considers worthy of attention.’ Link: https://www.theguardian.com/commentisfree/article/2024/jul/20/googles-wrong-answer-to-the-threat-of-ai-stop-indexing-content


  • OpenAI tests new search engine called SearchGPT amid AI arms race – The Guardian

    ‘OpenAI is testing a new search engine that uses generative artificial intelligence to produce results, raising the prospect of a significant challenge to Google’s dominance of the online search market.’ Link: https://www.theguardian.com/business/article/2024/jul/25/openai-search-engine-searchgpt


  • Reskilling in the Age of AI – Harvard Business Review

    ‘In the coming decades, as the pace of technological change continues to increase, millions of workers may need to be not just upskilled but reskilled—a profoundly complex societal challenge that will sometimes require workers to both acquire new skills and change occupations entirely. Companies have a critical role to play in addressing this challenge, but to…


  • Not yet panicking about AI? You should be – there’s little time left to rein it in – The Guardian

    ‘Only a handful of people grasp the magnitude of the changes that are about to hit us. They’re exciting – and terrifying.’ Link: https://www.theguardian.com/commentisfree/article/2024/jul/22/artificial-intelligence-panic-time-change


  • Meta Releases Largest Open-Source AI Model Yet – Lifehacker

    ‘After Meta said it would keep its next multimodal AI out of Europe, the company is now releasing Llama 3.1, an open-source AI model with performance rivaling ChatGPT and Google Gemini. The model now has 405 billion parameters, achieved with training on 16,000 enterprise-level Nvidia GPUs. What’s that mean for you, aside from some existential dread on…


  • Authorship in artificial intelligence-generated works: Exploring originality in text prompts and artificial intelligence outputs through philosophical foundations of copyright and collage protection – Journal of World Intellectual Property

    ‘The advent of artificial intelligence (AI) and its generative capabilities have propelled innovation across various industries, yet they have also sparked intricate legal debates, particularly in the realm of copyright law. Generative AI systems, capable of producing original content based on user-provided input or prompts, have introduced novel challenges regarding ownership and authorship of AI-generated…


  • Assessing law students in a GenAI world to create knowledgeable future lawyers – International Journal of the Legal Profession

    ‘Assessing law students has always been a challenging task, but the introduction of Generative Artificial Intelligence (GenAI), such as ChatGPT, compounds the problems already caused by increased student numbers, contract cheating and budget cuts in universities. As GenAI rapidly develops, legal educators must find ways to accommodate, and even incorporate, GenAI into their curricula and assessments so…