-
UK proposes letting tech firms use copyrighted work to train AI – The Guardian
‘Campaigners for the protection of the rights of creatives have criticised a UK government proposal to let artificial intelligence companies train their algorithms on their works under a new copyright exemption.’ Link: https://www.theguardian.com/technology/2024/dec/17/uk-proposes-letting-tech-firms-use-copyrighted-work-to-train-ai
-
Generative AI and Climate Change Are on a Collision Course – Wired
‘From energy to resources, data centers have grown too greedy.’ Link: https://www.wired.com/story/true-cost-generative-ai-data-centers-energy/
-
Ireland’s national AI strategy refresh: the seven strands – OUT-LAW.com
‘Ireland’s refreshed national AI strategy should help the country build on its recent recognition as one of the world’s top performing nations in AI, relative to its size.’ Link: https://www.pinsentmasons.com/out-law/analysis/ireland-national-ai-strategy-refresh-the-seven-strands
-
UK arts and media reject plan to let AI firms use copyrighted material – The Guardian
‘Writers, publishers, musicians, photographers, movie producers and newspapers have rejected the Labour government’s plan to create a copyright exemption to help artificial intelligence companies train their algorithms.’ Link: https://www.theguardian.com/technology/2024/dec/19/uk-arts-and-media-reject-plan-to-let-ai-firms-use-copyrighted-material
-
Copyright and Artificial Intelligence – GOV.UK
‘Both our creative industries and our AI sector are UK strengths. They are vital to our national mission to grow the economy. This consultation sets out our plan to deliver a copyright and AI framework that rewards human creativity, incentivises innovation and provides the legal certainty required for long-term growth in both sectors.’ Link: https://www.gov.uk/government/consultations/copyright-and-artificial-intelligence/copyright-and-artificial-intelligence
-
Skills and competencies of academic librarians to use information technology tools in the digital era: A systematic literature review – Information Development
‘The systematic review analyzed 27 works of literature related to the skills and competencies needed by academic librarians to use information technology (IT) tools effectively in the digital age. The review is carried out following the recommended Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines. Relevant literature was extracted from eight major academic…
-
Assessing the Effectiveness of Academic Integrity Institutional Policies: How Can Honor Code and Severe Punishments Deter Students’ Cheating—Moderating Approach? – Sage Open
‘This paper examined the role of honor codes and severity of punishment on the students’ perception of cheating seriousness in order to assess the effectiveness of institutional policies on preventing the academic misconduct. In order to further put into perspective the obtained results, two moderating factors were included in the empirical analysis—students’ understanding and support…
-
Considering the Impact of AI on the Professional Status of Teaching – The Clearing House: A Journal of Educational Strategies, Issues and Ideas
‘The purpose of this perspective essay is not to dissect the merits or deficits of AI in practice in classrooms, nor is it intended to approach the topic from a technical standpoint. Rather, this manuscript begins from a place of acceptance, recognizing that AI in education is already here and the adaptive and evolutionary nature…
-
A model of ‘rough justice’ for internet intermediaries from the perspective of EU copyright law – Computer Law & Security Review
‘Internet intermediaries’ content moderation raises two major problems. The first relates to the accuracy of the moderation practices, which is an issue on whether the intermediaries over-enforce or under-enforce. The second problem concerns the inherent privatization of justice that results when enforcement of rights is left to a private party. The purpose of the article…
-
AI copyright regime steers away from requiring licences in all cases – OUT-LAW.com
‘Prohibiting AI developers from training their AI models with copyrighted content without a licence in all cases would likely harm the UK’s global competitiveness in AI development, the UK government has said.’ Link: https://www.pinsentmasons.com/out-law/news/ai-copyright-regime-steers-requiring-licences-all-cases
-
Works in Progress Webinar: Lessons learned from implementing an AI reference chatbot at the University of Calgary Library – OCLC Research
‘In this webinar, University of Calgary Library staff reflect on lessons learned from the implementation of the Library’s AI reference chatbot.’ Link: https://www.oclc.org/research/events/2024/implementing-an-ai-reference-chatbot.html
-
Should You Write with Gen AI – Harvard Business Review
‘The promise of higher productivity makes using ChatGPT and other gen AI tools to help with daily business writing tempting. But there are risks: loss of your unique voice, inadvertent errors, and ethical pitfalls, all of which can negatively impact your credibility and relationships. Using AI to help you rather than replace you as you…
-
AI won’t just speed up the legal system — it will revolutionise it – The Times (£)
‘Artificial intelligence is going to help citizens to assert their legal rights, and the opportunity for lawyers will be in creating this new way of working.’ Link: https://www.thetimes.com/uk/law/article/ai-wont-just-speed-up-the-legal-system-it-will-revolutionise-it-lc308nkmq
-
The Impact and Value of AI for IP and the Courts – a speech by Lord Justice Birss – Courts and Tribunals Judiciary
‘The Rt Hon. Lord Justice Birss, Deputy Head of Civil Justice, delivered a speech at the Life Sciences Patent Network European Conference in London on 3 December 2024.’ Link: https://www.judiciary.uk/the-impact-and-value-of-ai-for-ip-and-the-courts-a-speech-by-lord-justice-birss/
-
AI Webinar: Is copyright a barrier to AI? – CILIP Knowledge & Information Management Group
‘As a Knowledge and Information Management professional, have you ever assessed AI tools in the context of potential copyright infringement? How does current copyright and intellectual property legislation address AI issues? How do you protect third-party copyrighted works in your AI environment? Are existing copyright exceptions sufficient to implement text and data mining or machine…
-
Generative AI Is My Research and Writing Partner. Should I Disclose It? – Wired
‘“If I use an AI tool for research or to help me create something, should I cite it in my completed work as a source? How do you properly give attribution to AI tools when you use them?”’ Link: https://www.wired.com/story/prompt-disclose-at-in-creative-work-teach-kids-about-chatbots/
-
Machine Unlearning Doesn’t Do What You Think: Lessons for Generative AI Policy, Research, and Practice – Cooper et al
‘We articulate fundamental mismatches between technical methods for machine unlearning in Generative AI, and documented aspirations for broader impact that these methods could have for law and policy. These aspirations are both numerous and varied, motivated by issues that pertain to privacy, copyright, safety, and more. For example, unlearning is often invoked as a solution…
-
Exploring Memorization and Copyright Violation in Frontier LLMs: A Study of the New York Times v. OpenAI 2023 Lawsuit – Freeman, Rippe, Debenedetti & Andriushchenko
‘Copyright infringement in frontier LLMs has received much attention recently due to the New York Times v. OpenAI lawsuit, filed in December 2023. The New York Times claims that GPT-4 has infringed its copyrights by reproducing articles for use in LLM training and by memorizing the inputs, thereby publicly displaying them in LLM outputs. Our…
-
Gender bias in visual generative artificial intelligence systems and the socialization of AI – AI and Society
‘Substantial research over the last ten years has indicated that many generative artificial intelligence systems (“GAI”) have the potential to produce biased results, particularly with respect to gender. This potential for bias has grown progressively more important in recent years as GAI has become increasingly integrated in multiple critical sectors, such as healthcare, consumer lending,…
-
Student use of generative AI as a composing process supplement: Concerns for intellectual property and academic honesty – Computers and Composition
‘This article discusses the nuanced challenges of using Generative Artificial Intelligence in multimodal compositions while maintaining an ethical adherence to ideas of academic honesty and intellectual property. Through examining hypothetical scenarios, we can see that multimodality complicates the concept of “fair use” in academic contexts, since image or audio generation via AI functions differently than…
-
Harvard’s Library Innovation Lab launches Institutional Data Initiative – Harvard Law Today
‘The new program aims to make public domain materials housed at Harvard Law School Library and other knowledge institutions available to train AI.’ Link: https://hls.harvard.edu/today/harvards-library-innovation-lab-launches-initiative-to-use-public-domain-data-to-train-artificial-intelligence/
-
Reputation Management in the ChatGPT Era – Edwards & Binns
‘Generative AI systems often generate outputs about real people, even when not explicitly prompted to do so. This can lead to significant reputational and privacy harms, especially when sensitive, misleading, and outright false. This paper considers what legal tools currently exist to protect such individuals, with a particular focus on defamation and data protection law.…
-
Who Is Responsible When AI Breaks the Law? – Yale Insights
‘If an AI is a black box, who is liable for its actions? The owner of the platform? The end user? Its original creator? Former Secretary of Homeland Security Michael Chertoff and Miriam Vogel, president and CEO of EqualAI, survey how AI both fits in and breaks existing legal frameworks. They argue that leaders need…
-
AI and the law – Thompson
‘I argue that generative AI will have an uneven effect on the evolution of the law. To do so, I consider generative AI as a labor-augmenting technology that reduces the cost of both writing more complete contracts and litigating in court. The contracting effect reduces the demand for court services by making contracts more complete.…
-
ChatGPT Potential for Improving Library Services – Proceedings of the 2nd International Conference on Culture and Sustainable Development
‘Artificial intelligence (AI) technology is now rapidly developing in various fields, such as healthcare, transportation, agriculture, and education. With these significant advancements, AI is becoming increasingly important in helping humans perform various complex tasks. One form of AI that is currently gaining public attention is AI-based chatbots such as ChatGPT, which has the ability to…
-
The Mirage of Artificial Intelligence Terms of Use Restrictions – Henderson & Lemley
‘Artificial intelligence (AI) model creators commonly attach restrictive terms of use to both their models and their outputs. These terms typically prohibit activities ranging from creating competing AI models to spreading disinformation. Often taken at face value, these terms are positioned by companies as key enforceable tools for preventing misuse, particularly in policy dialogs. But…
-
Personalism in Generative AI Deployment: Deciding Ethically When Human Creative Expression is at Stake – Humanistic Management Journal
‘Generative Artificial Intelligence (GAI) has the potential to automate, integrate or augment human creativity. Current literature reveals that organizations adopting such disruptive technology can both boost or hinder human creativity. Such ambiguity poses an ethical dilemma for decision-makers: while managers are pressured to adopt GAI quickly for optimization, holding on to their economic responsibilities, they…
-
Teaching and AI in the postdigital age: Learning from teachers’ perspectives – Teaching and Teacher Education
‘This interview-based study aimed to understand how teachers make sense of their work and themselves in relation to artificial intelligence (AI) and other digital technologies, and was conceived as a means of learning with and from teachers. Navigating recent AI developments raised questions about thinking, creativity, production, and the meaning and value of humanity, along…
-
Addressing the regulatory gap: moving towards an EU AI audit ecosystem beyond the AI Act by including civil society – AI and Ethics
‘The European legislature has proposed the Digital Services Act (DSA) and Artificial Intelligence Act (AIA) to regulate platforms and Artificial Intelligence (AI) products. We review to what extent third-party audits are part of both laws and how is access to information on models and the data provided. By considering the value of third-party audits and…
-
The silence of the LLMs: Cross-lingual analysis of guardrail-related political bias and false information prevalence in ChatGPT, Google Bard (Gemini), and Bing Chat – Telematics and Informatics
‘This article presents a comparative analysis of political bias in the outputs of three Large Language Model (LLM)-based chatbots – ChatGPT (GPT3.5, GPT4, GPT4o), Bing Chat, and Bard/Gemini – in response to political queries concerning the authoritarian regime in Russia. We investigate whether safeguards implemented in these chatbots contribute to the censorship of information that…
-
Creative data justice: a decolonial and indigenous framework to assess creativity and artificial intelligence – Information, Communication & Society
‘In the last decade, the Global South has emerged as a significant player in the data economy due to their majority user base, and studying its role is crucial to comprehend the future of AI. As societies grapple with the implications of AI on creative life, there is an opportunity to reevaluate the creative contributions…
-
The digital fingerprint of learner behavior: Empirical evidence for individuality in learning using deep learning – Computers and Education: Artificial Intelligence
‘Personalized learning builds upon the fundamental assumption of uniqueness in learning behavior, often taken for granted. Quite surprisingly, however, the literature provides little to no empirical evidence backing the existence of individual learning behaviors. Driven by curiosity, we challenge this axiom. Our operationalization of a unique learning behavior draws an analogy to a fingerprint –…
-
Understanding local government responsible AI strategy: An international municipal policy document analysis – Cities
‘The burgeoning capabilities of artificial intelligence (AI) have prompted numerous local governments worldwide to consider its integration into their operations. Nevertheless, instances of notable AI failures have heightened ethical concerns, emphasising the imperative for local governments to approach the adoption of AI technologies in a responsible manner. While local government AI guidelines endeavour to incorporate…
-
Speech by the Master of the Rolls: Are rights sufficiently human in the age of the machine? – Courts and Tribunals Judiciary
Sir Geoffrey Vos, Master of the Rolls and Head of Civil Justice in England and Wales. Blackstone Lecture, Pembroke College, Oxford. Link: https://www.judiciary.uk/speech-by-the-master-of-the-rolls-are-rights-sufficiently-human-in-the-age-of-the-machine/
-
The grass is not always greener: Teacher vs. GPT-assisted written corrective feedback – System
‘Written Corrective Feedback (WCF) is a crucial pedagogical practice where teachers annotate student writing to correct errors and improve language skills, albeit one that is time-consuming and laborious for large classes or under time constraints. However, the advent of advanced generative artificial intelligence and large language models, specifically ChatGPT, has introduced new possibilities for automating…
-
We need to start wrestling with the ethics of AI agents – MIT Technology Review
‘AI could soon not only mimic our personality, but go out and act on our behalf. There are some things we need to sort out before then.’ Link: https://www.technologyreview.com/2024/11/26/1107309/we-need-to-start-wrestling-with-the-ethics-of-ai-agents/
-
UK government failing to list use of AI on mandatory register – The Guardian
‘Not a single Whitehall department has registered the use of artificial intelligence systems since the government said it would become mandatory, prompting warnings that the public sector is “flying blind” about the deployment of algorithmic technology affecting millions of lives.’ Link: https://www.theguardian.com/technology/2024/nov/28/uk-government-failing-to-list-use-of-ai-on-mandatory-register
-
Yes, That Viral LinkedIn Post You Read Was Probably AI-Generated – Wired
‘AI-generated writing is now all over the internet. The introduction of automated prose can sometimes change a website’s character, like when once beloved publications get purchased and overhauled into AI content mills. Other times, however, it’s harder to argue that AI really changed anything. For example, look at LinkedIn.’ Link: https://www.wired.com/story/linkedin-ai-generated-influencers/
-
Towards Automatic Classification of Learner-Centred Feedback – Computers and Education: Artificial Intelligence
‘In higher education, delivering effective feedback is pivotal for enhancing student learning but remains challenging due to the scale and diversity of student populations. Learner-centered feedback, a robust approach to effective feedback that tailors to individual student needs, encompasses three key dimensions—Future Impact, Sensemaking, and Agency, which collectively include eight specific components, thereby enhancing its relevance and…
-
Microsoft Is Denying That Office 365 Trains Its AI – Lifehacker
‘Following concerns that erupted on social media and its own support forums over the past few weeks, Microsoft wants to set the record straight: the company does not use Microsoft 365 (formerly Microsoft Office) apps to train its AI models, Copilot or otherwise.’ Link: https://lifehacker.com/tech/microsoft-rumors-office-365-ai
-
ChatGPT, can you solve the content moderation dilemma? – International Journal of Law and Information Technology
‘This article conducts a qualitative test of the potential use of large language models (LLMs) for online content moderation. It identifies human rights challenges arising from the use of LLMs for that purpose. Different companies (and members of the technical community) have tested LLMs in this context, but such examinations have not yet been centred…
-
Employer as an AI System Operator and Tortious Liability for Damage Caused by AI Systems: European and US Perspectives – The Chinese Journal of Comparative Law
‘The article examines if the standard of protecting parties injured by artificial intelligence (AI) systems used by professional operators is high in the European Union (EU) as compared to the USA—that is, whether the liability model of an operator, as applicable in the EU, ensures that injured parties have effective protection. For the purposes of…
-
Navigating uncertainty: Exploring consumer acceptance of artificial intelligence under self-threats and high-stakes decisions – Technology in Society
‘In an era of transformation fueled by Artificial Intelligence (AI), human resistance to adopt this powerful technology has emerged as one of its most critical barriers. In a series of four studies involving almost 4,000 consumers, this research explores factors that contribute to consumer reluctance toward AI through theories related to algorithm aversion, decision-making under…
-
The legal battle against explicit AI deepfakes – Financial Times (£)
‘It is easier than ever to forge graphic video and images. But campaigners hope that new laws could offer a template for controlling artificial intelligence.’ Link: https://www.ft.com/content/e2fa34b2-6987-494d-a81a-1bdb6693671f
-
Online Safety Act duties cover gen-AI and chatbots, Ofcom confirms – OUT-LAW.com
‘Online service providers have been given a “valuable reminder” that content generated by AI will fall in scope of the UK’s Online Safety Act’s requirements in the same way content created by human users does, an expert in technology regulation has said.’ Link: https://www.pinsentmasons.com/out-law/news/online-safety-act-duties-cover-gen-ai-and-chatbots
-
Guidance for using the AI Management Essentials tool – Department for Science, Innovation & Technology
‘AI Management Essentials (AIME) is a self-assessment tool designed to help businesses establish robust management practices for the development and use of AI systems. The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organisational processes that are in place to enable the responsible development and use of…
-
Law professor gives Lexis+ AI a failing grade – Canadian Bar Association
‘Artificial intelligence (AI) is rapidly being deployed in many sectors of society. As with any new technology, we must understand its capabilities and limitations, particularly given the high stakes of using AI in the legal context where professional obligations apply and clients’ vital interests are on the line.’ Link: https://www.nationalmagazine.ca/en-ca/articles/law/opinion/2024/law-professor-gives-lexis-ai-a-failing-grade
-
As Lawyers and Lawmakers Tackle AI, the 1990s Loom Large – Los Angeles Times
‘AI innovation mirrors the early “Wild West” days of the internet – is regulation soon to come?’ Link: https://www.latimes.com/b2b/business-of-law-2024-trends-updates-visionaries-and-the-in-house-counsel-awards-recap/story/2024-11-17/as-lawyers-and-lawmakers-tackle-ai-the-1990s-loom-large
-
The Official ChatGPT App Is Now Available on PC – Lifehacker
‘As of Nov. 15, OpenAI has finally rolled out its ChatGPT app for all users on Windows. Whether you pay for ChatGPT or don’t, or run Windows 10 or Windows 11, you can use the dedicated ChatGPT experience on your PC. To get it, head to OpenAI’s download page, and click the Download for Windows option. Or, head directly…
-
How OpenAI stress-tests its large language models – MIT Technology Review
‘OpenAI is once again lifting the lid (just a crack) on its safety-testing processes. Last month the company shared the results of an investigation that looked at how often ChatGPT produced a harmful gender or racial stereotype based on a user’s name. Now it has put out two papers describing how it stress-tests its powerful large language models to…