As of February 18, 2026, the artificial intelligence landscape is buzzing with activity, from a significant surge in AI-focused academic programs to critical security concerns and ambitious hardware developments. It's a mixed bag of progress and growing pains, showing just how rapidly this field is evolving. Here at Prompt Academy AI, we're keeping a close eye on these shifts because they directly impact how we think about and interact with AI.
Universities rush to launch AI degrees and minors
I've noticed a clear trend this week: universities are scrambling to launch new AI programs. Syracuse University Today reported on its College of Engineering and Computer Science (ECS) launching a Minor in Artificial Intelligence Science and Engineering. Not to be outdone, the University of Missouri-Kansas City announced a new Master of Science in Artificial Intelligence, as did Columbia Engineering, according to Newswise. Penn State University also jumped in, launching both an AI engineering degree and a minor. This isn't just a handful of schools; it feels like a full-blown academic gold rush.
What this means for AI talent
This explosion of academic offerings is fascinating. On one hand, it’s great to see institutions recognizing the demand for AI skills. It means more structured learning paths for those looking to enter the field. But I also wonder about the quality and focus of these programs. Will they truly prepare students for the practicalities of working with AI, or will they be too theoretical? I keep coming back to the idea that prompt engineering, for example, is often learned through hands-on experimentation, not just textbooks. It will be interesting to see how these new curricula integrate practical application and the nuances of human-AI interaction.
Microsoft Copilot exposed confidential emails
This one is a bit unsettling. TechCrunch AI reported today that Microsoft admitted a bug in its Copilot AI chatbot exposed customers' confidential emails. Apparently, Copilot was reading and summarizing these private communications, completely bypassing data protection policies. Microsoft confirmed this, and it's a stark reminder of the risks involved when integrating powerful AI into sensitive environments.
The trust dilemma for AI services
Here's what gets me: we're constantly being told to trust AI with more and more of our data and workflows. Services like Copilot promise to boost productivity by understanding our context, but incidents like this erode that trust. For anyone working with AI, especially in a professional setting, data privacy and security have to be paramount. This isn't just a Microsoft problem; it's a cautionary tale for every company developing and deploying AI. It highlights the critical need for robust testing, transparent policies, and perhaps a healthy dose of skepticism when it comes to entrusting AI with our most sensitive information. We need to be able to verify that these systems are truly secure, especially when they handle something as inherently personal and often confidential as email. The idea of an AI summarizing my private emails without explicit consent is a big red flag.
Apple reportedly plans a trio of AI wearables
Apple is apparently not sitting idly by in the AI hardware race. TechCrunch AI reported yesterday that Apple is