Welcome to OStack Knowledge Sharing Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


0 votes
86 views
in Technique by (71.8m points)

How to tag and store log descriptions in a SQL database

I have logs being captured that contain both a log message and a log "tag".

These "tags" describe the logging events as key-value pairs, for example:

  • CPU_USAGE: 52.3
  • USER_LOGGED_IN: "steve15"
  • NUMBER_OF_RETURNED_RESULTS: 125

I want to store these key-value pairs in a table. Each parameter can be a string, a float, or an integer, so I can't simply put all keys in one column and all values in a second column. What is a good SQL table design for storing this kind of data? How is the "tagging" of logs typically done? My ultimate goal is to funnel this information into a dashboard so I can monitor resource usage and bottlenecks on charts.

A constraint is that I would like to be able to add new keys without needing to modify my database schema, so having each parameter as a separate column isn't good.

Various solutions I have thought of are:

  1. Storing each value as a string and adding another column in my "tag" table that dictates the actual type
  2. Use a JSON column
  3. Just altering my table to add a new column every time I think of a new tag to log :(
question from: https://stackoverflow.com/questions/65878625/how-to-tag-and-store-log-descriptions-in-a-sql-database


1 Answer

0 votes
by (71.8m points)

In SQL, key-value pairs are often stored simply as strings. You can use a view or application code to convert them back to a more appropriate data type.

I notice that you have left out date/times -- those are a little trickier, because you really want canonical formats such as YYYYMMDD, or perhaps Unix epoch times (the number of seconds since 1970-01-01).

You can extend this by adding a separate column for each type you want to store, along with a type column that records which one actually holds the value.
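A minimal sketch of that typed key-value layout, using SQLite via Python (the table and column names here are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE log_tag (
        log_id     INTEGER NOT NULL,
        tag_key    TEXT    NOT NULL,
        value_type TEXT    NOT NULL CHECK (value_type IN ('str', 'int', 'float')),
        value_str  TEXT,       -- populated when value_type = 'str'
        value_int  INTEGER,    -- populated when value_type = 'int'
        value_real REAL        -- populated when value_type = 'float'
    )
""")

# One column per type; value_type says which column is meaningful for the row.
rows = [
    (1, "CPU_USAGE", "float", None, None, 52.3),
    (1, "USER_LOGGED_IN", "str", "steve15", None, None),
    (1, "NUMBER_OF_RETURNED_RESULTS", "int", None, 125, None),
]
conn.executemany("INSERT INTO log_tag VALUES (?, ?, ?, ?, ?, ?)", rows)

# A view (or COALESCE in queries) can merge the typed columns back together.
cpu = conn.execute(
    "SELECT value_real FROM log_tag WHERE tag_key = 'CPU_USAGE'"
).fetchone()[0]
print(cpu)  # 52.3
```

The trade-off is that every query has to know (or check) which value column to read, which is why the view-based conversion mentioned above is useful.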

However, a key-value pair may not be the best approach for this type of data. A common solution is to do the following:

  • Determine whether any keys are "common" enough that you really care about them. These are often a date/time or a user column, and there may be several of them.
  • Store these columns in separate columns, parsing them when you insert the data (or perhaps using triggers).
  • Store the rest (or all) as JSON within the row as "extra details".
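The hybrid layout described above can be sketched like this (again SQLite via Python; the names and the example timestamp are illustrative assumptions):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE log_event (
        id        INTEGER PRIMARY KEY,
        logged_at TEXT NOT NULL,   -- the "common" columns you query often
        username  TEXT,
        details   TEXT             -- everything else, as a JSON blob
    )
""")

# Frequently-queried keys are promoted to real columns at insert time;
# the remaining tags go into the JSON "extra details" column.
tags = {"CPU_USAGE": 52.3, "USER_LOGGED_IN": "steve15",
        "NUMBER_OF_RETURNED_RESULTS": 125}
conn.execute(
    "INSERT INTO log_event (logged_at, username, details) VALUES (?, ?, ?)",
    ("2021-01-25T12:00:00", tags.pop("USER_LOGGED_IN"), json.dumps(tags)),
)

row = conn.execute(
    "SELECT username, details FROM log_event WHERE username = 'steve15'"
).fetchone()
details = json.loads(row[1])
print(details["CPU_USAGE"])  # 52.3
```

The promoted columns get ordinary B-tree indexes and fast WHERE clauses, while rarely-used tags stay flexible inside the JSON blob.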

Or, in Postgres, you can just store the whole thing as JSONB and put indexes on the JSONB column. That gives you both performance and simplicity when inserting the data.
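A schema fragment sketching the JSONB approach, assuming PostgreSQL (table and column names are illustrative):

```sql
CREATE TABLE log_event (
    id        bigserial PRIMARY KEY,
    logged_at timestamptz NOT NULL DEFAULT now(),
    tags      jsonb NOT NULL
);

-- A GIN index makes containment queries (@>) on the tags fast.
CREATE INDEX log_event_tags_idx ON log_event USING gin (tags);

INSERT INTO log_event (tags)
VALUES ('{"CPU_USAGE": 52.3, "USER_LOGGED_IN": "steve15"}');

-- e.g. find all events for a given user:
SELECT id, logged_at
FROM log_event
WHERE tags @> '{"USER_LOGGED_IN": "steve15"}';
```

JSONB preserves the original types (numbers stay numbers, strings stay strings), which sidesteps the string-conversion issue entirely.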

