Artificial intelligence: benefits and drawbacks
Reflections
“In the future AI will be used to automate more tasks as organisations seek to improve services and reduce costs. However, there will be many repercussions that impact workers, privacy, safety, security and property rights. As a result, legal protection of rights will require new laws as governments work through the complex issues raised by increasing AI usage and applications,” says Karen Higgins from The Parish of Freshwater.
Artificial intelligence (AI) is a field of computer science that endeavours to make computers act intelligently, like a person. This involves programming computers to perform tasks that typically require human intelligence, such as understanding language, recognising patterns, making decisions and solving problems.
There are different types of AI, ranging from basic rule-based systems that follow predefined instructions to more advanced systems that can learn from data and adapt over time. One common approach in AI is machine learning, where computers are trained on large amounts of data to recognise patterns and make predictions or decisions without being programmed for every possible scenario.
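The idea of learning from data rather than following hand-written rules can be illustrated with a toy example. The sketch below fits a straight line to a handful of made-up example points and uses it to predict an unseen value; the data and the study-hours scenario are invented for illustration, and real machine-learning systems use far larger datasets and far more flexible models.

```python
# A toy illustration of "learning from data": instead of hand-coding a rule,
# we fit a straight line to example points and use it to predict new values.
# (Hypothetical data; real systems learn from millions of examples.)

def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented "training data": hours of study versus test score.
hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 64, 70]

slope, intercept = fit_line(hours, scores)
# Predict a score for 6 hours, a value never seen during "training".
prediction = slope * 6 + intercept
print(round(prediction, 1))  # prints 73.9
```

The program was never told the rule connecting hours and scores; it inferred one from the examples, which is the essence of the machine-learning approach described above.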
AI is used in many applications, including recommendation systems like those used by Netflix and Amazon (to recommend content and products), autonomous vehicles, medical diagnosis, and much more.
AI is already significantly impacting our everyday lives, in both positive and negative ways, because it brings substantial benefits alongside real drawbacks.
The positive aspects of AI include:
- Efficiency because AI can perform tasks much faster and with fewer errors than people, leading to increased productivity and efficiency in various industries.
- Automation because AI can automate repetitive tasks, freeing up workers to focus on more creative and high-level work.
- 24/7 availability because AI systems can operate around the clock without needing breaks, making them ideal for tasks that require continuous monitoring or support.
- Data analysis because AI can analyse large amounts of data quickly and identify patterns or insights that people might miss, leading to better decision making in fields such as finance, healthcare and marketing.
- Personalisation because AI-powered systems can personalise experiences for users based on their preferences and behaviour, such as personalised recommendations in streaming services or targeted advertising.
In a Church context, St John’s Cathedral, as part of its Resource Church work, is exploring the dual impact of AI on faith communities, examining both its potential to enhance community engagement and accessibility, as well as the ethical concerns and challenges it presents.
Some negative aspects of AI include:
- Job displacement — because AI automates tasks previously done by people, it can lead to job loss and displacement, particularly for tasks that are routine or repetitive.
- Increased bias and compromised fairness because AI systems can perpetuate or amplify biases present in the data they are trained on, leading to unfair or discriminatory outcomes, such as in hiring algorithms or criminal justice systems.
- Privacy concerns because AI systems often rely on large amounts of personal data, raising concerns about privacy and data security. For example, the Australian Information Commissioner and Privacy Commissioner found that Clearview AI, Inc. breached Australians’ privacy by scraping biometric information from the Internet and disclosing it via a facial recognition tool.
- Dependency because reliance on AI systems can lead to dependency and reduce people’s skills and capabilities in certain areas, potentially making societies more vulnerable to system failures or disruptions.
- Ethical dilemmas because AI raises complex ethical questions, such as the difficulty of ascertaining whether an image or voice is AI-generated or comes from an actual photo or person; the use of AI in surveillance; and the potential loss of human control over decision-making processes.
To address the ethical issues, the Australian Government has developed an AI Ethics Framework. This framework outlines voluntary principles and guidance for the ethical design, development and use of AI technologies. However, the framework is not enforceable by law. People who feel harmed or disadvantaged by AI must look to existing laws for protection, such as privacy legislation and consumer protection legislation.
One area viewed as ripe for AI efficiency is legal research and legal writing. This work is currently performed by paralegals and junior lawyers. Legal research involves looking up relevant legislation, regulations and court decisions, identifying conflicting issues, and assembling the information into an informative and concise document. AI is an excellent tool for this purpose. There are already commercial applications for AI legal research that law firms can subscribe to. The applications can have additional features, including drafting of contracts and briefs, as well as tracking time and billing.
There is debate over the extent to which AI will replace the jobs of paralegals and junior lawyers. Some say AI will augment legal staff and that there will always be a need for people’s skill and judgement. Others are of the opinion that many paralegal roles will be made redundant.
AI work-product must be carefully checked by a person. Reuters reports on a well-known case in the US in which two New York lawyers submitted false case citations generated by ChatGPT. The lawyers represented a man suing an airline for personal injury. The judge fined the lawyers, while dismissing the case against the airline because it was filed too late.
“Discovery” is a process in litigation in which each side shares records. A major case can have millions of documents, including texts and emails, to review. Thorough checking of documents is essential because they may contain relevant evidence that could change the outcome of a case. AI is an essential tool for document review because it is less expensive than having people read each document, and it is more accurate than people at finding documents that match the search terms.
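At its simplest, the automated search described above amounts to flagging every document that mentions one of the agreed search terms. The sketch below shows that minimal form; the documents and terms are invented for illustration, and real e-discovery tools add far more sophistication (stemming, ranking and machine-learned relevance models).

```python
# A minimal sketch of keyword-based document review: keep only the
# documents that mention any of the agreed search terms.
# (Invented documents and terms; real tools are far more sophisticated.)

def flag_relevant(documents, search_terms):
    """Return the documents that mention at least one search term."""
    terms = [t.lower() for t in search_terms]
    return [doc for doc in documents
            if any(term in doc.lower() for term in terms)]

documents = [
    "Email: the shipment was delayed at the warehouse.",
    "Text: lunch on Friday?",
    "Memo: invoice 1042 covers the delayed shipment.",
]

hits = flag_relevant(documents, ["shipment", "invoice"])
print(len(hits))  # prints 2: the email and the memo match
```

A computer applies this check identically to every document, which is why it outperforms tired human reviewers at the narrow task of matching search terms, even though people must still judge what the flagged documents actually mean.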
Art is another industry impacted by AI. DALL-E 3 and Midjourney are two popular online applications that generate images from a text prompt. One can write a basic prompt, such as “create an image of an oil painting of a night sky”, and such an image will quickly be created. One can also add more detail to the prompt, such as “in the style of van Gogh”, and images similar to The Starry Night will appear.
In the case of long-deceased artists, this practice has become widely accepted. However, AI is trained on both historic and contemporary art. Many living artists whose styles are commonly imitated by AI object on the basis that they should be remunerated when their work is imitated. Media companies whose copyrighted works have been used for AI training also object. Some artists and media companies have filed lawsuits against AI companies over copyright violations. The outcome of these lawsuits will be important in establishing boundaries on generative AI training and output.
Generative AI has been used to create image and voice impersonations of people, particularly of celebrities, known as “deepfakes”. Deepfake pornographic images of Taylor Swift were uploaded to the social media platform X earlier this year and were viewed 47 million times before being taken down. A celebrity can use their media reach to mitigate reputational damage, but when deepfakes are made of ordinary people, the harm is difficult to address and overcome. The impacts of this abuse are very distressing, and such images may never be completely eliminated because that is the nature of the Internet.
As the Australian Government eSafety Commissioner states, the abuse is covered by criminal laws and it is still abuse whether the “image or video” is:
- “real
- altered or faked to look like you
- shared in a way that makes people think it’s you, even when it’s not (such as a nude of someone else tagged with your name).”
AI is already present in our everyday lives. It is used to summarise product reviews, create suggestions for text and email replies, and make streaming platform and shopping recommendations. Virtual assistants like Siri and Alexa use AI to respond to people’s requests. Navigation apps like Google Maps and Waze leverage AI algorithms to provide real-time traffic updates, route optimisation and predictive traffic analysis. AI-powered chatbots provide personalised financial advice and support to customers. So AI tools can be useful in our everyday lives.
In the future AI will be used to automate more tasks as organisations seek to improve services and reduce costs.
However, there will be many repercussions that impact workers, privacy, safety, security and property rights. As a result, legal protection of rights will require new laws as governments work through the complex issues raised by increasing AI usage and applications.