Request Spotify dossiers and evaluate them with Go and R
Playback Length Statistics
Listing 3 needs to limit the maximum recorded play length of all the songs it analyzes to five minutes, because my streaming history also included 90-minute audio plays, which distorted the statistics beyond recognition. Filtering is handled by the recoding statement in line 4, which uses the condition jdata$msPlayed < 300000 to filter out all tracks with more than 300 seconds of playing time from the jdata dataframe before assigning the result back to the jdata variable.
Recoding statements take place at both the row and the column level. The square brackets in line 4 contain the conditions, separated by a comma. The filter applies the first condition to each row, and the second to each column. The result is a dataframe, which can have both fewer rows and fewer columns. In this case, however, we only need to remove the rows, not columns, which is why the second part of the condition in square brackets after the comma remains empty. Yes, you need eagle eyes to read and understand R code correctly!
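To see these mechanics in isolation, here is a minimal sketch with made-up values (the dataframe and its contents are purely illustrative):

df <- data.frame(msPlayed = c(250000, 400000, 12000),
                 trackName = c("A", "B", "C"))
# Row condition before the comma, column selection after it;
# leaving the column slot empty keeps all columns.
df[df$msPlayed < 300000, ]
# Keeps the rows for tracks "A" and "C", with both columns intact.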
The very compact listing then creates a histogram for the msPlayed entries in the jdata dataframe and renders the counter values for the playback durations as a bar graph. This is done by the built-in R function hist() in line 7, after line 5 has set the jdata dataframe as the reference point and line 6 has directed any future PNG output to the hist.png file. This ensures that R creates a PNG file of this name with the bar chart at the end of the script.
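Listing 3 itself is not reprinted on this page; pieced together from the description above, its core steps presumably look something like the following sketch (the use of attach() and the closing dev.off() are assumptions on my part):

# Keep only tracks played for less than five minutes
jdata <- jdata[jdata$msPlayed < 300000, ]
attach(jdata)     # use jdata's columns as the reference point
png("hist.png")   # direct subsequent plot output to hist.png
hist(msPlayed)    # histogram of the playback durations
dev.off()         # close the PNG device, writing the file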
Hot Group
Would the data in the streaming history also allow conclusions to be drawn about preferences for a certain type of music, depending on the time of day? Listing 4 takes a stab at this: It reads the JSON data, extracts the hour of the playback end time as a numeric value from the endTime date stamp of each streaming event, and then determines which artist was played most often within that time window, averaged over all streaming days.
Listing 4
hourly.r

01 #!/usr/bin/env Rscript
02 library("jsonlite")
03 jdata <- fromJSON("MyData/StreamingHistory0.json", simplifyDataFrame = TRUE)
04 # only enjoyed songs
05 jdata <- jdata[jdata$msPlayed > 60000, ]
06 d <- as.POSIXct(jdata$endTime, tz = "UTC")
07 jdata$hour <- as.numeric(format(d, tz="America/Los_Angeles", "%H"))
08 songs <- subset(jdata, , select=c(hour, artistName))
09 agg <- aggregate(songs$hour, by=list(artistName=songs$artistName, hour=songs$hour), FUN=length)
10 winners <- agg[order(agg$hour, -agg$x),]
11 winners <- winners[!duplicated(winners[2]),]
12 winners
Figure 4 shows the original JSON data in the dataframe with all fields as found in the JSON file. Line 5 in Listing 4 discards all songs that have not been played back for at least one minute, to avoid introducing false positives into the statistics. Now the task is to extract the hour of the day from the Spotify timestamp; this is done after adjusting the time zone. Spotify denotes the times as UTC (i.e., GMT), but I listen to the music in the Pacific Time (PT) zone on the US West Coast. This explains why the as.POSIXct() function reads the value as UTC from the JSON data, and the format formatter in line 7 converts it to the America/Los_Angeles zone. After doing this, the local time hour value determined with %H is available as a string. However, to sort the entries later on, R needs numeric values, which is why as.numeric() converts it to a number.
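A quick worked example illustrates the conversion; the timestamp here is invented, but follows the YYYY-MM-DD HH:MM layout that the as.POSIXct() call in Listing 4 digests:

d <- as.POSIXct("2021-01-15 04:30", tz = "UTC")
format(d, tz = "America/Los_Angeles", "%H")
# "20" -- 4:30am UTC is 8:30pm the previous evening in Los Angeles
as.numeric(format(d, tz = "America/Los_Angeles", "%H"))
# 20, now sortable as a number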
Now the dataframe is available in the jdata variable, as you can see in Figure 5. Line 8 in Listing 4 then uses subset() to convert the data into a dataframe with just two columns: the artist and the playback hour. R's built-in aggregate() function then aggregates all lines for an artist with the same hourly value in line 9. FUN=length specifies that the additional aggregation column contains the length (i.e., the number of artist-hour tuples).
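The following toy data shows what aggregate() produces here; the artists and counts are invented and smaller than the real ones:

songs <- data.frame(hour = c(20, 20, 20, 19),
                    artistName = c("Linkin Park", "Linkin Park",
                                   "Linkin Park", "ZZ Top"))
aggregate(songs$hour,
          by = list(artistName = songs$artistName, hour = songs$hour),
          FUN = length)
#    artistName hour x
# 1      ZZ Top   19 1
# 2 Linkin Park   20 3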
Figure 6 shows an excerpt from this intermediate result. Based on this, ZZ Top was played exactly once at 19.00 hours, while no fewer than 11 entries with Linkin Park pop up at 20.00 hours. There are several different ways to filter out only the top performers from this view (e.g., Linkin Park at 20.00 hours). One method that relies entirely on R's standard functions is as follows: Sort the dataframe by hour (ascending) and the number of events (descending). Using deduplication, the algorithm then keeps only the first entry for each hour value and discards the rest. The top performers for each hour value remain.
Line 10 sorts the previously generated agg dataframe according to the order() function specified in the square brackets. Its first parameter is the (positive) field name for the hour value; the second is the (negated) value of the counter determined by the length function. In R, the column newly created by the aggregation function goes by the name of x and, in this case, contains the number of results.
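In isolation, the sort trick with the negated counter column looks like this (again with toy values):

agg <- data.frame(artistName = c("ZZ Top", "Linkin Park", "Rainbow"),
                  hour = c(20, 20, 19),
                  x = c(1, 11, 2))
# Ascending by hour, descending by count thanks to the minus sign
agg[order(agg$hour, -agg$x), ]
#    artistName hour  x
# 3     Rainbow   19  2
# 2 Linkin Park   20 11
# 1      ZZ Top   20  1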
Line 11 runs a recoding statement over the dataframe, now named winners, and uses !duplicated(winners[2]) to specify that the second field (i.e., the hour value; R indexes always start at 1, not 0) may be present only once in the result. Consequently, the function keeps only the top-sorted entry for each hour value, along with the associated artist, and discards all others.
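Continuing the toy data from the previous snippet, duplicated() flags repeat hour values, and the negation keeps only the first (i.e., top-sorted) row per hour:

winners <- agg[order(agg$hour, -agg$x), ]
winners[!duplicated(winners[2]), ]
#    artistName hour  x
# 3     Rainbow   19  2
# 2 Linkin Park   20 11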
That takes care of the list with the most popular bands, as a function of the time of day at Perlmeister Studios! Figure 7 shows the output of the hourly.r R program (Listing 4). After midnight, it's either embarrassing oldies from the 1980s (Rainbow) or, as I vaguely recall, once from 1:00am to 4:00am, every single track by the band Sparks, after devouring a Netflix documentary featuring the band and proceeding to play back their songs for four hours. The next day at work was terrible, of course, but you only live once.
From experience, you can spend days tinkering with R before finding the right data structures and methods, which then often implement exactly what you want in just three lines. The reasons for this probably include the age of the language, which comes with a sort of anti-Python mindset ("Waddya mean, there's only one way to do this?"), and the many packages that have been released in an uncoordinated fashion over the decades since the original release. A Google search for a particular problem will often reveal three or four different approaches to the same issue. The training book by Robert I. Kabacoff [1] does a good job of explaining some basic procedures, but it is by no means an exhaustive reference.
More Secrets
If you rummage further in the Spotify dossier's ZIP file, you are likely to unearth a few more data treasures. For example, Spotify's Inferences.json file contains ascertained facts about the user, presumably to help Spotify serve up appropriate ads that the listener will also respond to.
In my case, Spotify assumed I had a preference for "Light Beer" (Figure 8), which is a joke and totally wrong, as anyone who knows me can attest if push comes to shove! But it would explain things if Bud Light ads started popping up on my screen.
Infos
- Kabacoff, Robert I. R in Action, Second Edition. Manning, May 2015: https://www.manning.com/books/r-in-action-second-edition