
Monday, December 9

Technology Trends for 2025


👋 Hi, I am Mark. I am a strategic futurist and innovation keynote speaker. I advise governments and enterprises on emerging technologies such as AI and the metaverse. My subscribers receive a free weekly newsletter on cutting-edge technology.


Time flies when you are living through exponential change! With 2025 approaching, it's time for my annual technology trend predictions—a tradition I've maintained since 2012 and one I deeply enjoy. Writing these articles allows me to reflect on the past year's predictions and explore the future with renewed curiosity.

I named 2024 “The Year of Science Reality,” as technologies once confined to science fiction became tangible. Looking back, most of my predictions aligned closely with the developments we witnessed, though some advanced faster than anticipated, while others remain on the horizon.

Here’s a quick recap:
Spot-on Predictions: 
Humanoids integrated with multimodal LLMs, exemplified by the conversational capabilities of Figure's Figure 01 robot, and conversational IoT devices, like LG’s empathetic AI home hub, validated my expectations. Similarly, the rise of deepfakes underscored the growing “fake reality,” and Edge AI adoption accelerated, enabling real-time processing on devices.

Emerging Trends: 
AI-quantum computing convergence shows promise but remains in its infancy, and some researchers now question whether we need quantum computing at all if we have advanced AI. Synthetic biology, though advancing, has yet to enter the mainstream. READ MORE...

Monday, May 22

AI on the Dark Web


OpenAI's large language models (LLMs) are trained on a vast array of datasets, pulling information from the internet's dustiest, most cobweb-covered corners.

But what if such a model were to crawl through the dark web — the internet's seedy underbelly where you can host a site without your identity being public or even available to law enforcement — instead? A team of South Korean researchers did just that, creating an AI model dubbed DarkBERT to index some of the sketchiest domains on the internet.

It's a fascinating glimpse into some of the murkiest corners of the World Wide Web, which have become synonymous with illegal and malicious activities from the sharing of leaked data to the sale of hard drugs.

It sounds like a nightmare, but the researchers say DarkBERT has noble intentions: trying to shed light on new ways of fighting cybercrime, a field that has made increasing use of natural language processing.

Perhaps unsurprisingly, making sense of the parts of the web that aren't indexed by search engines like Google, and that can often only be accessed via specific software, wasn't an easy task.

As detailed in a yet-to-be-peer-reviewed paper titled "DarkBERT: A Language Model for the Dark Side of the Internet," the team hooked their model up to the Tor network, a system for accessing parts of the dark web. It then got to work, creating a database of the raw data it found.
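For readers curious about the plumbing: below is a minimal sketch of how a crawler can route page requests through a locally running Tor client's SOCKS proxy (port 9050 by default). The onion address and helper name are placeholders of mine, not part of the researchers' actual pipeline.

```python
# Minimal sketch: fetching a page through a local Tor SOCKS proxy.
# Assumes a Tor client is running locally (default SOCKS port 9050) and that
# requests is installed with SOCKS support:  pip install "requests[socks]"
import requests

# "socks5h" makes DNS resolution happen inside Tor, which is required
# for .onion addresses to resolve at all.
TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

def fetch_via_tor(url: str, timeout: int = 60) -> str:
    """Fetch the raw HTML of a single page over the Tor network."""
    response = requests.get(url, proxies=TOR_PROXIES, timeout=timeout)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # Placeholder address; substitute an onion service you are permitted to crawl.
    html = fetch_via_tor("http://exampleonionaddress.onion")
    print(f"fetched {len(html)} bytes")
```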

The team says their new LLM was far better at making sense of the dark web than other models that were trained to complete similar tasks, including RoBERTa, which Facebook researchers designed back in 2019 to "predict intentionally hidden sections of text within otherwise unannotated language examples," according to an official description.
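That "predict intentionally hidden sections of text" objective is masked language modeling, and it is easy to see in action. Here is a minimal sketch using the Hugging Face transformers library and the public roberta-base checkpoint; the example sentence is mine, not from the paper.

```python
# Minimal sketch of RoBERTa's masked-language-modeling objective, using the
# public roberta-base checkpoint via Hugging Face transformers.
# Install with:  pip install transformers torch
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")

# RoBERTa hides words behind its "<mask>" token and predicts what belongs there.
for prediction in fill_mask("The dark web can only be accessed via <mask> software."):
    print(prediction["token_str"], round(prediction["score"], 3))
```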

"Our evaluation results show that DarkBERT-based classification model outperforms that of known pretrained language models," the researchers wrote in their paper.  READ MORE...