Think of a LogsQL query as a way to apply a series of filters to all your logs to find exactly what you need.
The easiest query is just typing a word. This will search for that word in the main log message field (_msg).
```
error
```
This finds every log entry that contains the word "error".
You can add more words to make your search more specific. When you just list filters one after another, LogsQL treats it as an AND operation, meaning the log must match all the conditions.
```
error database connection
```
This is the same as error AND database AND connection. The log must contain all three words.
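If you prefer to be explicit, LogsQL also accepts the `AND` keyword, so the same query can be written as:

```
error AND database AND connection
```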
By default, a query searches logs from all time, which can be slow. It's a best practice to always include a time filter.
```
_time:1h error
```
This is much more efficient. It finds logs containing the word "error" with timestamps in the last hour. You can use `m` for minutes, `h` for hours, and `d` for days (e.g., `_time:15m`, `_time:3d`).
- What it is: The `_msg` field is the primary, raw text of your log entry. It's the core message that your application produced, like `"INFO: User 'admin' logged in successfully"` or `"Error: Failed to connect to database at 10.0.1.23"`.
- How it's used: When you type a simple word filter like `error` without specifying a field, LogsQL automatically searches within the `_msg` field. It's the default search target.
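In other words, a bare word filter is shorthand for a filter on `_msg`. You can also name the field explicitly, which does the same thing:

```
_msg:error
```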
- What they are: These are not part of your raw log message. They are structured metadata (or labels) that are automatically added by your log collection agent (like Fluentd, Vector, Promtail, etc.) running in your Kubernetes cluster. They provide crucial context about where a log came from.
- Common examples include:
  - `kubernetes.pod_name`: The name of the specific pod that generated the log (e.g., `nginx-deployment-7d5dcf9f9d-abcde`).
  - `kubernetes.namespace_name`: The namespace the pod belongs to (e.g., `production`, `staging`).
  - `kubernetes.container_name`: The name of the container inside the pod (e.g., `nginx`, `my-app`).
  - `kubernetes.host`: The node (server) where the pod was running.
  - `kubernetes.labels.*`: Any Kubernetes labels you've applied to the pod, which are great for filtering by application, team, or environment.
These fields are powerful because they let you slice and dice your logs with precision. For example, you can find all errors from a specific application in your production namespace.
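For instance, the "errors from your production namespace" idea could look like this sketch (using the `kubernetes.namespace_name` field listed above; adjust the field name to whatever your agent actually attaches):

```
_time:1h kubernetes.namespace_name:production error
```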
Let's build the query for your specific request. You want to find logs where a particular field's value starts with a specific string. We'll use the kubernetes.pod_name field as our example.
Let's say you have an Nginx deployment and your pods are named nginx-web-xzy12, nginx-web-abc34, etc. You want to find all logs from any pod belonging to this deployment.
LogsQL has a specific filter for this called the Exact Prefix Filter.
- `field:`: The name of the field you want to search in (e.g., `kubernetes.pod_name:`).
- `=`: This signifies an exact match.
- `"prefix"`: The string that the field value must start with. You should put it in quotes.
- `*`: The wildcard that means "followed by anything else".
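Putting those pieces together, the general shape of the filter is (with `field_name` and `prefix` as placeholders):

```
field_name:="prefix"*
```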
- Specify the time range: Let's look at the last 15 minutes.

  ```
  _time:15m
  ```

- Add the field filter: We want to filter on the `kubernetes.pod_name` field.

  ```
  kubernetes.pod_name:="nginx-web-"*
  ```

- Combine them: Put them together to create the full query.

  ```
  _time:15m kubernetes.pod_name:="nginx-web-"*
  ```
This query will efficiently find all log entries from the last 15 minutes that came from any pod whose name begins exactly with nginx-web-.
Let's combine this with a search for a specific word in the `_msg` field. Find all logs from the `nginx-web-` pods that contain the word `denied`.

```
_time:15m kubernetes.pod_name:="nginx-web-"* denied
```
This breaks down to:
- WHEN: In the last 15 minutes (`_time:15m`)
- WHERE: From a pod whose name starts with "nginx-web-" (`kubernetes.pod_name:="nginx-web-"*`)
- WHAT: That contains the word "denied" in its message (`denied`)
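The same pattern extends with additional field filters. For example, a sketch that also restricts results to the production namespace (again assuming your collection agent attaches the `kubernetes.namespace_name` field mentioned earlier):

```
_time:15m kubernetes.namespace_name:production kubernetes.pod_name:="nginx-web-"* denied
```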
Since your `_msg` field contains JSON, you can't just type `level:error`. LogsQL doesn't know about the `level` key inside the JSON string by default.
You first need to tell LogsQL to parse the JSON. The primary tool for this is the `unpack_json` pipe.
The `| unpack_json` pipe reads the JSON string from a field (by default, `_msg`) and "unpacks" its key-value pairs into new, first-class fields that you can filter on.
Before `unpack_json`:
VictoriaLogs sees one field:

```
_msg: {"level": "info", "component": "auth-service", "user_id": 42}
```
After `| unpack_json`:
VictoriaLogs now sees these fields for the rest of the query:

- `level: info`
- `component: auth-service`
- `user_id: 42`
- And the original `_msg` field is still there.
Let's find all log entries with level equal to error.
- Select a time range:

  ```
  _time:1h
  ```

- Unpack the JSON from the `_msg` field:

  ```
  _time:1h | unpack_json
  ```

- Now, filter on the new `level` field:

  ```
  _time:1h | unpack_json | level:error
  ```
That's it! The `level:error` filter works because the `unpack_json` pipe made the `level` field available.
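As a side note, the LogsQL documentation also describes a `fields (...)` option for `unpack_json` that unpacks only the named keys, which can be cheaper on large JSON objects. A sketch, assuming your VictoriaLogs version supports it:

```
_time:1h | unpack_json fields (level) | level:error
```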
The `unpack_json` pipe is smart about nested objects. It flattens them using dot notation.
If your `_msg` is:

```
{"event": {"type": "login", "success": false}, "client": {"ip": "192.168.1.100"}}
```
After `| unpack_json`, you get these fields:

- `event.type: login`
- `event.success: false`
- `client.ip: 192.168.1.100`
Now you can write a query to find all failed login attempts:
```
_time:30m | unpack_json | event.type:login event.success:false
```