Thoughts on AI?

In higher education, generative AI is changing how we teach, learn, and innovate. On campuses, this ground-breaking technology is generating both enthusiasm and anxiety because it can produce anything from essays to complex data analyses. Students and faculty at Thompson Rivers University are tackling the obstacles it poses while discovering its enormous potential.

On the one hand, generative AI provides effective tools for efficiency and teamwork. Students can now use AI to co-create ideas, automate repetitive tasks, and tackle tough data analysis. It gives academics access to individualized instruction and innovative research opportunities. As one AI major who graduated from TRU put it, "Generative AI helped me analyze datasets and streamline my projects. It's about learning to collaborate with AI, not just use it."

But the emergence of AI also raises important issues. Given that AI tools can produce essays and even imitate student voices, how can institutions uphold academic integrity? Teachers are redesigning assessments to prioritize ethical use, creativity, and problem-solving over conventional memory-based testing.

The influence of generative AI in higher education will depend on how universities strike a balance between innovation and accountability. By embracing these tools thoughtfully, universities can open up new avenues for teaching and learning, preparing students for a future in which artificial intelligence plays a key role in almost every industry.

See how AI is changing the TRU student experience by watching our video.

By Rashid Chowdhury

Who’s in Control: AI and Human Oversight

Artificial Intelligence is everywhere, shaping industries and daily life. But as machines become more capable, a critical question arises: how do we maintain control? Without safeguards, AI systems can go astray—causing harm through biased decisions, unpredictable behavior, or even intentional misuse.

The debate over control is intense. Some experts call for strong regulations to ensure accountability and prevent harm, while others warn that excessive restrictions could slow innovation and limit AI’s potential. Striking the right balance is no small task.

Efforts are already underway. Developers are creating AI systems with built-in safety measures, while governments and researchers are pushing for ethical guidelines and rigorous audits. The goal? To create AI that operates reliably while keeping humans in charge.

This issue isn’t just about technology—it’s about trust. Can we design systems that respect human values and avoid unintended consequences? It’s a challenge that affects us all.

By asking tough questions and exploring solutions, we can better understand how to keep humanity at the center of decision-making. Watch our video to learn more about the work being done to ensure AI remains a tool for progress, not a force beyond our control.

Mo Gawdat | AI + Happiness. (n.d.). Mo Gawdat. https://www.mogawdat.com

Interview Clip – Tom Bilyeu. (2023, June 20). "Life As We Know It Will Be Gone Soon" – Dangers Of AI & Humanity's Future | Mo Gawdat [Video]. YouTube. https://www.youtube.com/watch?v=itY6VWpdECc

All stock videos are sourced from https://www.pexels.com and are free to use.

By Hamza Ahmad

Who’s to Blame? The Accountability Dilemma in AI

Hello, hi, hey! Welcome to the first episode of the Pixel Press Podcast, where we're tackling a question that gets at the heart of AI ethics: who's responsible when AI goes wrong? As AI systems make more decisions in our lives, from job interviews to healthcare recommendations, the stakes are high. When an algorithm denies someone a loan or makes an error in a critical medical diagnosis, where does the blame fall?

Our guest, Dr. Joseph Alexander Brown, an expert in AI ethics, takes us inside the challenges of assigning responsibility in a world where machines are making high-stakes decisions. Who should be held accountable: the programmers, the companies, or the AI itself? Dr. Brown explains the ethical and legal roadblocks, the importance of transparency, and the role of future regulations in making AI systems more accountable.

Join us as we navigate the complex and often blurry landscape of AI accountability. Hit play to hear Dr. Brown break down this fascinating, sometimes troubling, side of artificial intelligence.

Press play to get informed on the future of AI and the challenge of responsibility.

Written By Hamza Ahmad

AI Deepfakes Ruining Lives

The use of AI deepfakes involving deceased individuals touches on deeply personal and societal questions about privacy, ethics, and legacy. A deepfake is a digital recreation of someone's likeness, voice, and mannerisms, making it possible to produce realistic simulations of people in situations they never took part in. While some see this as a breakthrough that can bring back historical figures or extend the legacies of beloved entertainers, it raises complex concerns about control over a person's image, especially once they're no longer here to give consent.

For instance, movie studios have used deepfake technology to digitally recreate actors who have passed away, allowing them to “appear” in films years after their death. Fans often find this captivating, as it provides a sense of connection to stars they admired. Yet, this approach blurs the line between tribute and exploitation. Critics argue that without clear consent from the deceased or their family, these recreations can feel like a misuse of someone’s legacy for financial gain. The ethical stakes rise even higher when deepfakes of historical figures or political leaders are used in media or educational content. It’s important to consider whether altering their likeness or words distorts history or creates misleading interpretations of their actions or beliefs.

Beyond the entertainment industry, there’s also a growing unease about how AI deepfakes can be weaponized. With technology that can mimic deceased individuals, there’s potential for misleading propaganda, false news, or even malicious attacks against a person’s reputation. In this digital age, families may find themselves navigating complex legal battles to protect the memory of their loved ones. Without clear regulations, AI deepfakes raise thorny questions about who truly owns a person’s likeness once they’re no longer alive to control it.

Photo Credit: Chris Ume is a freelance VFX and AI artist. He created the Deepcruise series, from which the first picture is taken.

Photo Credit 2: No author is credited in the article; however, the photo is provided by the Deepfake Web Blog.

Photo Credit 3: Travis Schreiber, a journalist at GuaranteedRemoval, wrote an article on AI deepfakes and provided the photo of Jennifer Lopez.

By Hamza Ahmad