Thoughts on AI?

In higher education, generative AI is changing how we teach, learn, and innovate. On campuses, this ground-breaking technology is generating both enthusiasm and anxiety because it can produce anything from essays to complex data analyses. Students and faculty at Thompson Rivers University are tackling the obstacles it poses while discovering its enormous potential.

On the one hand, generative AI provides effective tools for efficiency and teamwork. Students can now use AI to co-create ideas, automate repetitive tasks, and tackle tough data analysis. It gives academics access to individualized instruction and innovative research opportunities. "Generative AI helped me analyze datasets and streamline my projects," says one TRU graduate who majored in AI. "It's about learning to collaborate with AI, not just use it."

But the emergence of AI also raises important issues. Given that AI tools can produce essays and even imitate student voices, how can institutions uphold academic integrity? Teachers are redesigning assessments to prioritize ethical use, creativity, and problem-solving over conventional memory-based testing.

The influence of generative AI in higher education will depend on how universities strike a balance between innovation and accountability. By embracing these tools thoughtfully, universities can open up new avenues for teaching and learning, preparing students for a future in which artificial intelligence plays a key role in almost every industry.

See how AI is changing the TRU student experience by watching our video.

By Rashid Chowdhury

Who’s in Control: AI and Human Oversight

Artificial Intelligence is everywhere, shaping industries and daily life. But as machines become more capable, a critical question arises: how do we maintain control? Without safeguards, AI systems can go astray—causing harm through biased decisions, unpredictable behavior, or even intentional misuse.

The debate over control is intense. Some experts call for strong regulations to ensure accountability and prevent harm, while others warn that excessive restrictions could slow innovation and limit AI’s potential. Striking the right balance is no small task.

Efforts are already underway. Developers are creating AI systems with built-in safety measures, while governments and researchers are pushing for ethical guidelines and rigorous audits. The goal? To create AI that operates reliably while keeping humans in charge.

This issue isn’t just about technology—it’s about trust. Can we design systems that respect human values and avoid unintended consequences? It’s a challenge that affects us all.

By asking tough questions and exploring solutions, we can better understand how to keep humanity at the center of decision-making. Watch our video to learn more about the work being done to ensure AI remains a tool for progress, not a force beyond our control.

Mo Gawdat | AI + Happiness. (n.d.). Mo Gawdat. https://www.mogawdat.com

Tom Bilyeu. (2023, June 20). "Life As We Know It Will Be Gone Soon" – Dangers Of AI & Humanity's Future | Mo Gawdat [Video]. YouTube. https://www.youtube.com/watch?v=itY6VWpdECc

All stock videos are sourced from https://www.pexels.com and are free to use.

By Hamza Ahmad