Browsing: Qwen
Qwen AI Releases Qwen-Scope: An Open-Source Sparse Autoencoder (SAE) Suite That Turns LLM Internal Features into Practical Development Tools
Large language models are remarkably capable, yet frustratingly opaque. When a model misbehaves — generating responses in the wrong language, repeating itself endlessly, or refusing safe…
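To make the teaser concrete, here is a minimal sparse-autoencoder sketch in NumPy. This is illustrative only and assumes nothing about Qwen-Scope's actual API: an SAE encodes a model activation into an overcomplete, mostly-zero feature vector, then reconstructs the activation from those features.

```python
import numpy as np

# Toy SAE sketch (hypothetical shapes; not Qwen-Scope code).
# Encode: f = ReLU(W_enc @ x + b_enc)  -- overcomplete, sparse features
# Decode: x_hat = W_dec @ f + b_dec    -- linear reconstruction

rng = np.random.default_rng(0)
d_model, d_features = 16, 64          # features are overcomplete (64 > 16)

W_enc = rng.normal(0, 0.1, (d_features, d_model))
b_enc = -0.5 * np.ones(d_features)    # negative bias pushes most features to zero
W_dec = rng.normal(0, 0.1, (d_model, d_features))
b_dec = np.zeros(d_model)

def encode(x):
    # ReLU keeps only a few features active per activation
    return np.maximum(0.0, W_enc @ x + b_enc)

def decode(f):
    return W_dec @ f + b_dec

x = rng.normal(0, 1.0, d_model)       # stand-in for a residual-stream activation
f = encode(x)
x_hat = decode(f)
sparsity = float(np.mean(f == 0.0))   # fraction of inactive features
```

In a trained SAE the individual active features tend to align with human-interpretable concepts, which is what makes them usable as debugging handles for behaviors like the language-mixing and repetition failures mentioned above.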
Qwen Team Releases FlashQLA: A High-Performance Linear Attention Kernel Library That Achieves Up to 3× Speedup on NVIDIA Hopper GPUs
The race to make large language models faster and cheaper to run has largely been fought at two levels: the model architecture and the hardware. But…
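For context on what "linear attention" buys you, here is a small NumPy sketch of the standard linear-attention recurrence (a generic formulation, not the FlashQLA kernels themselves): replacing softmax(QKᵀ) with a positive feature map lets a causal pass maintain a running state, turning the O(N²) cost in sequence length into O(N).

```python
import numpy as np

# Generic causal linear-attention recurrence (illustrative; not FlashQLA code).
rng = np.random.default_rng(1)
N, d = 8, 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))

def phi(x):
    # A simple positive feature map (elu(x) + 1 variant)
    return np.where(x > 0, x + 1.0, np.exp(x))

S = np.zeros((d, d))   # running sum of outer(phi(k_t), v_t)
z = np.zeros(d)        # running normalizer sum of phi(k_t)
out = np.zeros((N, d))
for t in range(N):     # one token at a time: O(N * d^2), no N x N matrix
    S += np.outer(phi(K[t]), V[t])
    z += phi(K[t])
    out[t] = (phi(Q[t]) @ S) / (phi(Q[t]) @ z)
```

The fixed-size state (S, z) is also why linear attention is attractive at inference time: memory per sequence stays constant instead of growing with a KV cache. Kernel libraries like FlashQLA exist to make this recurrence fast on real hardware.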
Alibaba Qwen Team Releases Qwen3.6-27B: A Dense Open-Weight Model Outperforming 397B MoE on Agentic Coding Benchmarks
Alibaba’s Qwen Team has released Qwen3.6-27B, the first dense open-weight model in the Qwen3.6 family — and arguably the most capable 27-billion-parameter model available today for…
A Coding Implementation with Qwen3.6-35B-A3B Covering Multimodal Inference, Thinking Control, Tool Calling, MoE Routing, RAG, and Session Persistence
```python
class QwenChat:
    def __init__(self, model, processor, system=None, tools=None):
        self.model, self.processor = model, processor
        self.tokenizer = processor.tokenizer
        self.history: list[dict] = []
        if system:
            self.history.append({"role": "system", "content": system})
```
…
Qwen Team Open-Sources Qwen3.6-35B-A3B: A Sparse MoE Vision-Language Model with 3B Active Parameters and Agentic Coding Capabilities
The open-source AI landscape has a new entry worth paying attention to. The Qwen team at Alibaba has released Qwen3.6-35B-A3B, the first open-weight model from the…
Alibaba Qwen Team Releases Qwen3.5 Omni: A Native Multimodal Model for Text, Audio, Video, and Realtime Interaction
The landscape of multimodal large language models (MLLMs) has shifted from experimental ‘wrappers’—where separate vision or audio encoders are stitched onto a text-based backbone—to native, end-to-end…
Mobile World Congress in Barcelona might be a European tech show, but for the past few years, the event has largely been dominated by Chinese phone…
Alibaba just released the Qwen3.5 Small models: a family ranging from 0.8B to 9B parameters, built for on-device applications
Alibaba’s Qwen team has released the Qwen3.5 Small Model Series, a collection of Large Language Models (LLMs) ranging from 0.8B to 9B parameters. While the industry…
Alibaba Qwen Team Releases Qwen3.5 Medium Model Series: A Production Powerhouse Proving That Smaller AI Models Are Smarter
The development of large language models (LLMs) has been defined by the pursuit of raw scale. While increasing parameter counts into the trillions initially drove performance…
Alibaba Qwen Team Releases Qwen3.5-397B MoE Model with 17B Active Parameters and 1M Token Context for AI Agents
Alibaba Cloud just updated the open-source landscape. Today, the Qwen team released Qwen3.5, the newest generation of their large language model (LLM) family. The most powerful…
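The "397B total, 17B active" framing rests on mixture-of-experts routing. The toy sketch below shows the assumed mechanics in NumPy (not Qwen3.5's actual implementation): a router scores all experts per token but runs only the top-k, so the parameters touched per token are a small fraction of the total.

```python
import numpy as np

# Toy top-k MoE router (illustrative mechanics, not Qwen3.5 code).
rng = np.random.default_rng(2)
d, n_experts, k = 8, 16, 2
experts = [rng.normal(0, 0.1, (d, d)) for _ in range(n_experts)]  # toy expert MLPs
W_gate = rng.normal(0, 0.1, (n_experts, d))                       # router weights

def moe_forward(x):
    logits = W_gate @ x
    top = np.argsort(logits)[-k:]            # indices of the k best-scoring experts
    w = np.exp(logits[top])
    w /= w.sum()                             # softmax over the selected experts only
    # Only k of n_experts expert matrices are evaluated for this token
    y = sum(wi * (experts[i] @ x) for wi, i in zip(w, top))
    return y, top

x = rng.normal(size=d)
y, chosen = moe_forward(x)
```

Here each token activates 2 of 16 experts, i.e. 1/8 of the expert weights; scale the same idea up and a 397B-parameter model can run with roughly 17B parameters active per token, which is what keeps its inference cost closer to a mid-size dense model.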
