Artificial intelligence: benefits and drawbacks

Reflections

“In the future AI will be used to automate more tasks as organisations seek to improve services and reduce costs. However, there will be many repercussions that impact workers, privacy, safety, security and property rights. As a result, legal protection of rights will require new laws as governments work through the complex issues raised by increasing AI usage and applications,” says Karen Higgins from The Parish of Freshwater.

Artificial intelligence (AI) is a field of computer science that endeavours to make computers act intelligently, like a person. This involves programming computers to perform tasks that typically require human intelligence, such as understanding language, recognising patterns, making decisions and solving problems.

There are different types of AI, ranging from basic rule-based systems that follow predefined instructions to more advanced systems that can learn from data and adapt over time. One common approach in AI is machine learning, where computers are trained on large amounts of data to recognise patterns and make predictions or decisions without being programmed for every possible scenario.
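To make the idea of “learning from data” concrete, here is a brief Python sketch that trains a simple classifier on labelled examples and then makes predictions about cases it has never seen. The scikit-learn library and the toy spam-filtering data are illustrative assumptions only, as the article does not name any particular tool or dataset.

```python
# A minimal illustration of machine learning: the model learns patterns from
# labelled examples instead of being programmed with a rule for every case.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: each email is described by two numbers
# [exclamation marks, suspicious words], labelled 1 = spam, 0 = not spam.
X_train = [[5, 3], [4, 4], [0, 0], [1, 0], [6, 2], [0, 1]]
y_train = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)              # learn patterns from the examples

# Predict labels for two emails the model has never seen before.
print(model.predict([[3, 2], [0, 0]]))   # likely output: [1 0]
```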

AI is used in many applications, including recommendation systems like those used by Netflix and Amazon (to recommend content and products), autonomous vehicles, medical diagnosis, and much more.

AI is already having a significant impact on our everyday lives, in both positive and negative ways, because it brings substantial benefits as well as drawbacks.

The positive aspects of AI include:

In a Church context, St John’s Cathedral, as part of its Resource Church work, is exploring the dual impact of AI on faith communities, examining both its potential to enhance community engagement and accessibility and the ethical concerns and challenges it presents.

Some negative aspects of AI include:

To address the ethical issues, the Australian Government has developed an AI Ethics Framework. This framework outlines voluntary principles and guidance for the ethical design, development and use of AI technologies. However, the framework is not enforceable by law. People who feel harmed or disadvantaged by AI must look to existing laws for protection, such as privacy legislation and consumer protection legislation.

One area viewed as ripe for AI efficiency is legal research and legal writing. This work is currently performed by paralegals and junior lawyers. Legal research involves looking up relevant legislation, regulations and court decisions, identifying conflicting issues, and assembling the information into an informative and concise document. AI is an excellent tool for this purpose. There are already commercial applications for AI legal research that law firms can subscribe to. The applications can have additional features, including drafting of contracts and briefs, as well as tracking time and billing.

There is debate over the extent to which AI will replace the jobs of paralegals and junior lawyers. Some say AI will augment legal staff and that there will always be a need for people’s skill and judgement. Others believe that many paralegal roles will be made redundant.

AI work-product must be carefully checked by a person. Reuters reports on a well-known case in the US in which two New York lawyers submitted false case citations generated by ChatGPT. The lawyers represented a man suing an airline for personal injury. The judge fined the lawyers, while dismissing the case against the airline because it was filed too late.

“Discovery” is a process in litigation in which each side shares records. A major case can have millions of documents, including texts and emails, to review. Thorough checking of documents is essential because they may contain relevant evidence that could change the outcome of a case. AI is an essential tool for document review because it is less expensive than having people read each document, and it is more accurate than manual review at finding the words in the search terms.
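As a highly simplified illustration of search-term review in discovery, the Python sketch below flags which documents mention any of a list of search terms. The file names, text and terms are invented for this example, and real e-discovery platforms use far more sophisticated techniques than simple keyword matching.

```python
# Flag documents that contain any of the search terms (a toy illustration;
# real document-review tools are far more sophisticated than this).
documents = {
    "email_001.txt": "Please confirm the delivery schedule for the contract.",
    "memo_014.txt": "Lunch is booked for Friday at noon.",
    "email_207.txt": "The warranty claim was discussed at the board meeting.",
}
search_terms = ["contract", "warranty", "claim"]

for name, text in documents.items():
    hits = [term for term in search_terms if term in text.lower()]
    status = "flag for review" if hits else "skip"
    print(f"{name}: {status} {hits}")
```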

Art is another industry impacted by AI. DALL-E 3 and Midjourney are two popular online applications that generate images, given a prompt. One can write a basic prompt to “create an image of an oil painting of a night sky”, and such an image will quickly be created. One can also add more details to the prompt, such as “in the style of van Gogh”, and images similar to The Starry Night will appear.
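For readers curious about how such tools can be driven from a prompt, the sketch below shows one way to request an image programmatically. It assumes the OpenAI Python SDK and a DALL-E 3 endpoint; the exact parameters and the need for an API key are assumptions, and the same result can be achieved simply by typing the prompt into the applications mentioned above.

```python
# A sketch of requesting an image from a text prompt, assuming the OpenAI
# Python SDK (the article mentions DALL-E 3 but not this library; parameter
# details may differ and a valid API key is required).
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.images.generate(
    model="dall-e-3",
    prompt="An oil painting of a night sky, in the style of van Gogh",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```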

In the case of long-deceased artists, this practice has become widely accepted. However, AI models are trained on both historic and contemporary art. Many living artists whose styles are commonly imitated by AI object on the basis that they should be remunerated when their work is imitated. Media companies whose copyrighted works have been used for AI training also object. Some artists and media companies have filed lawsuits against AI companies over copyright violations. The outcome of these lawsuits will be important in establishing boundaries on generative AI training and output.

Generative AI has been used to create image and voice impersonations of people, particularly celebrities, known as “deepfakes”. Deepfake pornographic images of Taylor Swift were uploaded to the social media platform X earlier this year and were viewed 47 million times before being taken down. A celebrity can use their media reach to mitigate reputational damage, but when deepfakes are made of ordinary people, the harm is difficult to address and overcome. The impacts of this abuse are very distressing, and, given the nature of the Internet, such images may never be completely eliminated.

As the Australian Government eSafety Commissioner states, the abuse is covered by criminal laws and it is still abuse whether the “image or video” is:

AI is already present in our everyday lives. It is used to summarise product reviews, create suggestions for text and email replies, and make streaming platform and shopping recommendations. Virtual assistants like Siri and Alexa use AI to respond to people’s requests. Navigation apps like Google Maps and Waze leverage AI algorithms to provide real-time traffic updates, route optimisation and predictive traffic analysis. AI-powered chatbots provide personalised financial advice and support to customers. So AI tools can be useful in our everyday lives.

In the future AI will be used to automate more tasks as organisations seek to improve services and reduce costs.

However, there will be many repercussions that impact workers, privacy, safety, security and property rights. As a result, legal protection of rights will require new laws as governments work through the complex issues raised by increasing AI usage and applications.
