Working with the JSON data format

Downstream Processing

In addition to the tools that output and format JSON data for the user, several tools support downstream processing of JSON data, taking it as input for use in other programs (post-processing).

If you receive JSON data via an interface, it is good programming practice to sanity check the received data before further processing. The sanity check includes two stages:

  • Syntactic correctness – is the notation valid? Are all brackets balanced (equal numbers of opening and closing brackets), and are the commas and quotation marks in the right places?
  • Correctness of data fields – does the received data structure match the data definition (JSON schema)?

For the first stage, it is best to use JSONLint, which is described earlier in this article. For the second stage, you need the JSON schema that describes the data structure; you then compare this description with the received data.

On json-schema.org, you will find an overview of validators [15], sorted by the programming languages in which they were developed. For example, consider the validate-json tool implemented in PHP [16]. If you are more into Python, jsonschema [17] serves the same purpose. Both tools are called in the same way.

Defining the JSON Schema

Listing 10 shows the JSON schema with which you define the exact format of your data structure. The schema matches the book inventory used earlier in this article and is stored in the book-inventory-schema.json file in the local directory.

Listing 10

JSON Schema

{
  "$schema": "https://json-schema.org/draft/2019-09/schema",
  "title": "Book",
  "type": "object",
  "required": ["author", "title", "publication"],
  "properties": {
    "author": {
      "type": "string",
      "description": "The author's name"
    },
    "title": {
      "type": "string",
      "description": "The book's title"
    },
    "publication": {
      "type": "number",
      "minimum": 0
    },
    "tags": {
      "type": "array",
      "items": {
        "type": "string"
      }
    }
  }
}

The schema definition references the JSON standard used (the draft from September 2019, in this case) in the second line. The definition contains a number of keywords. Table 3 explains these keywords in more detail; a complete list of all supported keywords is available at json-schema.org [18].

Table 3

JSON Schema Keywords

Keyword                Description
$schema                Schema specification the document follows
title                  Title of the schema
type                   Type of the JSON data
properties             Properties of each value (keys and values allowed for the field)
required               List of required properties
properties.type        Data type of an entry
properties.minimum     Minimum value of an entry
properties.maximum     Maximum value of an entry
properties.minLength   Minimum number of characters for an entry
properties.maxLength   Maximum number of characters for an entry
properties.pattern     Regular expression matched against the value of an entry
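As an illustration of the string-related keywords from Table 3, the author property from Listing 10 could be tightened as follows (a hypothetical fragment for demonstration purposes, not part of the schema above):

```json
"author": {
  "type": "string",
  "description": "The author's name",
  "minLength": 1,
  "maxLength": 100,
  "pattern": "^[A-Z]"
}
```

With this definition, a validator would reject an empty author field or a name that does not start with a capital letter.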

The next task is to validate the records by checking whether they match the specified schema. Listing 11 shows a single record from the book inventory in readable format. The compact version of the record would squeeze all the braces and fields into a single line.

Listing 11

JSON Record

{
  "author": "Stephen Fry",
  "title": "The Hippopotamus",
  "publication": 1994
}

The validate-json tool expects two parameters in the call: the dataset and the schema (Listing 12). If everything goes well, the call completes without any output (line 2); otherwise, validate-json grumbles (lines 4 and 5). To provoke the error message starting in line 4, we turned the numeric specification for the year of publication (1994) into the string "1994", which means that the data type in the dataset no longer matched the data type stored in the JSON schema. validate-json has every reason to complain.

Listing 12

Calling validate-json

01 $ validate-json record.json book-inventory-schema.json
02 $
03 $ validate-json record.json book-inventory-schema.json
04 JSON does not validate. Violations:
05 [publication] String value found, but a number is required

Some programming languages also offer suitable helper libraries. In Python, for example, you can use jsonschema, and for Node.js, you can use the Express framework.
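To give an idea of what such a check involves, the following sketch validates a record against a simplified subset of the schema from Listing 10 using only the Python standard library. It is an illustration of the principle, not a replacement for the jsonschema package, which implements the full specification:

```python
import json

# Simplified subset of the book inventory schema from Listing 10
SCHEMA = {
    "required": ["author", "title", "publication"],
    "properties": {
        "author": {"type": "string"},
        "title": {"type": "string"},
        "publication": {"type": "number"},
    },
}

# Map JSON Schema type names to Python types
TYPES = {"string": str, "number": (int, float)}

def violations(record, schema):
    """Return a list of schema violations for a flat JSON object."""
    problems = []
    for key in schema["required"]:
        if key not in record:
            problems.append(f"[{key}] required property missing")
    for key, rules in schema["properties"].items():
        if key in record and not isinstance(record[key], TYPES[rules["type"]]):
            problems.append(f"[{key}] expected {rules['type']}")
    return problems

# The same deliberate error as in Listing 12: the year as a string
record = json.loads(
    '{"author": "Stephen Fry", "title": "The Hippopotamus", "publication": "1994"}'
)
print(violations(record, SCHEMA))  # → ['[publication] expected number']
```

Like validate-json, the sketch stays silent (an empty list) for a valid record and reports the offending field otherwise.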

Processing JSON

The list of command-line tools and helpers that read, search, and modify JSON data is quite extensive. We stopped counting after more than 20 entries (see Table 4 for a sample). Developer Ilya Sher maintains a useful, commented overview of the options [19].

Table 4

Command-Line Tools

Tool                Application (selection)
faq, Xidel          Convert formats from and to JSON (BSON, Bencode, TOML, XML, YAML, etc.)
fx, gofx, jq, jid   Filter JSON data
jello               Filter JSON data with Python syntax
jtbl                Output to a table
Underscore          Processing via the command line

Jtbl, for example, takes JSON records and knits a pretty table from them. Figure 7 shows how this table looks for the book inventory, with each record in a separate row. Note that jtbl only copes with flat JSON structures; it cannot yet handle nesting.

Figure 7: Representing records as a table.

