
Artificial Intelligence: Friend or Foe?

Artificial intelligence has been among the fastest-growing fields in the world over the past year. AI has come to the forefront of conversations among politicians, big businesses, and philosophers. 2023 was a breakthrough year for photo-manipulation technology as well as text-generation programs, including ChatGPT, creating countless situations of ethical and professional concern.

Researchers at universities, including the University of Toronto, have been using AI to advance scientific discovery while also working to ensure this rapidly advancing technology aligns with human values. 2024 will prove to be a pivotal year in handling AI-generated content and in discerning what was actually created by the human mind.

AI is presently reshaping how industries around the world promote themselves and create content. It is also creating challenges that we must confront. As practical applications of AI develop rapidly, regulators and lawmakers now have an obligation to keep pace.

The Canadian government is set to pass the Artificial Intelligence and Data Act (AIDA) later this year. This will be Canada's first legislation to regulate AI and will place responsibilities on businesses using AI technology.

“For businesses, this means clear rules to help them innovate and realize the full potential of AI,” says a post on the Government of Canada website pertaining to AIDA. “For Canadians, it means AI systems used in Canada will be safe and developed with their best interest in mind.” 

“The Government of Canada remains actively engaged in international discussions on AI regulations and continues to work with partners around the world to drive collaboration and ensure alignment in the responsible development and use of AI.”

Another issue is what constitutes fair use of AI-generated art based on the pre-existing works of other artists. AI systems created more than 15 billion images last year, all of which drew from the art and photography of actual artists. AI cannot produce original work; it can only create images based on prompt words and pre-existing images the system has access to. Many artists have seen their work republished as a piece of AI 'art'; in some instances, the artist's signature or watermark can be seen in the AI-generated image.

While there are benefits to using AI, there are also serious flaws. Deepfake images (AI-altered images, videos, or audio recordings that create a seemingly real replication of a person) have created havoc in the lives of some. Many such situations involve women whose images have been used without their consent to create content depicting them in explicit and sexualized ways. One such situation in the United States involving a minor has led to a lawsuit.

Deepfakes can also involve politicians, a concern voiced by many as the presidential election is set to take place in November. While tech giants pledge to actively combat the creation and distribution of deepfake videos and images, it is unclear how effective any company can be at this time. Deepfake images can be used to create propaganda that convinces viewers certain individuals have made statements they never actually made. Discerning deepfake footage or photographs from reality can sometimes be easy, while in other instances it is far more difficult to determine whether they are authentic. As the technology improves, determining what is real and what is AI-generated will only become more difficult.

Teachers around the world are confronted with a new set of tools provided by AI, along with many detrimental aspects. Among the issues universities and high schools are facing is the number of essays teachers are seeing that have been written by AI. The trouble for teachers and students is determining what has been co-written or written entirely by an AI program such as ChatGPT.

While there have been several instances of students caught using AI to write essays, there have also been situations where professors have given failing grades to students who could not prove they wrote their work honestly. At Texas A&M, an entire class was failed for what the professor deemed to be the mass use of ChatGPT. Many students were denied their diplomas as a result of failing the class. Ultimately, it emerged that the professor had been using AI detection software incorrectly.

The University of Manitoba, like many other post-secondary institutions in Canada, has released specific guidelines relating to the improper use of AI. The U of M has gone so far as to suggest that professors not rely on AI detection programs, as the results have been inaccurate in many cases.

AI has also been used to write item descriptions for online products and articles published on reputable websites. In November 2023, Sports Illustrated came under collective fire after it was exposed that several credited authors on its website were AI writers. This meant the writers were fictional, and every bit of content they were credited for had been AI-generated, according to Futurism, a science and technology news outlet. The 'authors' even had personal biographies and photos, all AI-generated. Founded in 1954, Sports Illustrated was once a titan of the sports journalism industry. In January, the outlet saw massive layoffs, leading many to speculate about the future of the giant. This may hint at why it chose to use fake writers rather than employing journalists.

While Sports Illustrated tried to publish its AI authors in secret, BuzzFeed has taken to openly publishing AI-generated work. It has published AI-generated travel guides and quizzes and is still experimenting with the idea. Such decisions create anxiety among many in the industry, as replacing human journalists and writers with AI removes humanity from the publications. BuzzFeed is still reeling from shutting down its BuzzFeed News branch last year and laying off 15 per cent of its workforce.

The History of AI 

In the 1950s, Alan Turing explored the mathematical potential of artificial intelligence, suggesting that computers could be designed to make decisions based on available information, similar to humans. In 1950, he wrote a paper titled Computing Machinery and Intelligence, which explored how intelligent machines might be built and how their intelligence could be tested.

Computers at that point in time were costly to operate, could only execute commands, and did not have the capability to store memory. By 1956, a program called Logic Theorist, which could mimic human problem-solving skills, was funded by the Research and Development (RAND) Corporation and presented at the Dartmouth Summer Research Project on Artificial Intelligence, organized to bring together many minds to explore AI. Logic Theorist is considered by many to be the first artificial intelligence program.

Research in the field continued through the decades. In the 1980s, the concept of deep learning was popularized, allowing computers to learn from experience. Government funding came and went, all the while expanding the field and inspiring new researchers. In 1997, a chess-playing computer program created by IBM called Deep Blue defeated world chess champion Garry Kasparov, the first time a reigning world chess champion was defeated by a computer.

AI today is rapidly developing. The laws we create today may not account for the problems that have yet to arise.

– Matthew Harrison, U Multicultural

