Browsing: FineTuning
Reinforcement fine-tuning on Amazon Bedrock with OpenAI-Compatible APIs: a technical walkthrough
In December 2025, we announced the availability of reinforcement fine-tuning (RFT) on Amazon Bedrock, starting with support for Nova models. This was followed by extended support…
This AI Paper Introduces TinyLoRA, A 13-Parameter Fine-Tuning Method That Reaches 91.8 Percent GSM8K on Qwen2.5-7B
Researchers from FAIR at Meta, Cornell University, and Carnegie Mellon University have demonstrated that large language models (LLMs) can learn to reason using a remarkably small…
Unsloth AI Releases Unsloth Studio: A Local No-Code Interface For High-Performance LLM Fine-Tuning With 70% Less VRAM Usage
The transition from a raw dataset to a fine-tuned Large Language Model (LLM) traditionally involves significant infrastructure overhead, including CUDA environment management and high VRAM requirements.…
This post is a collaboration between AWS, NVIDIA, and Heidi. Automatic speech recognition (ASR), often called speech-to-text (STT), is becoming increasingly critical across industries like healthcare,…
How to Build a Stable and Efficient QLoRA Fine-Tuning Pipeline Using Unsloth for Large Language Models
In this tutorial, we demonstrate how to efficiently fine-tune a large language model using Unsloth and QLoRA. We focus on building a stable, end-to-end supervised fine-tuning…
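The teaser above centers on QLoRA's parameter efficiency. As a rough illustration of the underlying low-rank arithmetic (the layer counts and hidden sizes below are assumptions for a generic 7B-class transformer, not figures from the linked tutorial):

```python
# Illustrative arithmetic only: estimates the trainable-parameter savings
# that motivate LoRA/QLoRA. Dimensions are assumed, not measured.

def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """A LoRA adapter adds two low-rank matrices: A (d_in x r) and B (r x d_out)."""
    return d_in * rank + rank * d_out

# Assumed shapes: 32 layers, each with 4 attention projections of 4096 x 4096.
layers, projections, hidden = 32, 4, 4096
rank = 16

full = layers * projections * hidden * hidden                          # full fine-tuning
lora = layers * projections * lora_trainable_params(hidden, hidden, rank)

print(f"full fine-tune params: {full:,}")           # 2,147,483,648
print(f"LoRA (r=16) params:    {lora:,}")           # 16,777,216
print(f"trainable fraction:    {lora / full:.2%}")  # 0.78%
```

QLoRA adds 4-bit quantization of the frozen base weights on top of this, which is where the further VRAM reduction comes from; only the small adapter matrices are kept in higher precision and updated.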
Foundation models deliver impressive out-of-the-box performance for general tasks, but many organizations need models to consume their business knowledge. Model customization helps you bridge the gap…
Enterprises are increasingly shifting from relying solely on large, general-purpose language models to developing specialized large language models (LLMs) fine-tuned on their own proprietary data. Although…
Advanced fine-tuning techniques for multi-agent orchestration: Patterns from Amazon at scale
Our work with large enterprise customers and Amazon teams has revealed that high-stakes use cases continue to benefit significantly from advanced large language model (LLM)…
This post is co-written with Sunaina Kavi, AI/ML Product Manager at Omada Health. Omada Health, a longtime innovator in virtual healthcare delivery, launched a new nutrition…
Liquid Foundation Models (LFM 2) define a new class of small language models designed to deliver strong reasoning and instruction-following capabilities directly on edge devices. Unlike…
