DEV Community

# llm

Posts

Stop Fine-Tuning Blindly: When to Fine-Tune—and When Not to Touch Model Weights
Comments · 6 min read

Getting Started with @neuledge/context
1 · Comments · 6 min read

Your AI Reviewer Has the Same Blind Spots You Do
1 · Comments · 4 min read

How I use AmpCode for free
Comments · 3 min read

Why 1M token context windows won't solve agent amnesia
Comments · 3 min read

AI Inference Cost Calculator: The Hidden Reality of Production AI Costs
Comments · 4 min read

I don’t hate SQL. I hate metadata friction.
Comments · 4 min read

I Built a Thing That Builds Things: Tres Comas Scrum
Comments · 3 min read

You Don’t “Prompt Engineer” Identity — You Architect It (Why CloYou Explores Constrained AI Clones)
Comments · 4 min read

The Symbol for All of Us is Null
Comments · 6 min read

LLM Audit for Developers: A 30-Minute Self-Check Before You Tune That Prompt Again
5 · Comments · 4 min read

Circuit Breakers for LLM APIs: Applying SRE Patterns to AI Infrastructure
Comments · 6 min read

Building an LLM from Scratch — Session 06: Giving the Model a Profession (Fine-Tuning) 🎯👨‍⚕️
Comments · 4 min read

How to Implement Prompt Caching on Amazon Bedrock and Cut Inference Costs in Half
Comments · 12 min read

Securing AI-Powered Applications: A Comprehensive Guide to Protecting Your LLM-Integrated Web App
Comments · 8 min read