DEV Community

# llm

Posts

- **3 Prompt Engineering Techniques That Unlock Better AI Reasoning** (1 reaction, 5 min read)
- **The OWASP Top 10 for LLMs — A Pentester's Practical Guide** (12 min read)
- **Snowflake Cortex: The AI Layer Your Data Team Actually Needs** (8 min read)
- **Orq.ai Explained: Operating LLM Systems in Production Without Losing Control** (29 reactions, 14 comments, 6 min read)
- **Top 5 AI Gateways to Implement Guardrails in Your AI Applications** (1 comment, 4 min read)
- **Your AI Chatbot Has No Immune System. Here's How Attackers Exploit That.** (5 reactions, 1 comment, 3 min read)
- **PiGym – Pi Digits Memorization Game** (1 min read)
- **LLM Prompt Injection Attacks: The Complete Security Guide for Developers Building AI Applications** (1 reaction, 20 min read)
- **Set Up Ollama, NGROK, and LangChain** (2 min read)
- **vLLM Quickstart: High-Performance LLM Serving** (18 min read)
- **I Built an Autonomous Robot Diary with "Boredom Scores" and a Sense of Time 🤖📖** (2 min read)
- **Antigravity Reverse Proxy Guide** (18 min read)
- **How We Prevent AI Agent Drift and Code-Slop Generation** (43 reactions, 18 comments, 3 min read)
- **I Built codex-monitor So I Could Ship Code While I Slept** (3 reactions, 3 comments, 6 min read)
- **I Run LLMs on a 768GB IBM POWER8 Server (And It's Faster Than You Think)** (6 min read)