Tag Archives: bash

Keeping an eye on Glow Blogs⤴

from @ wwwd – John's World Wide Wall Display

screenshot of pi.johnj.info/gb

One of the things I am interested in as part of my work on Glow Blogs is what people are using Glow Blogs for.

Glow Blogs is made up of 33 different WordPress multi-sites: one for each Local Authority in Scotland and one central one.

The home page of each LA lists the last few posts. Visiting these pages will give you an idea of what is going on. In the past I’ve opened up each L.A. in a tab in my browser and gone through them. I had a script that would open them all up. I’ve now worked out an easy way to give a quick overview.

Recently I noticed shot-scraper, a tool for taking automated screenshots of websites. I’ve used various automatic webpage screenshot services in the past. These have usually been services that either charge money or have shut down. I used webkit2png a wee bit, but ran into now-forgotten problems, perhaps around https?

shot-scraper can be automated and extended. It is a command line tool, and using these is always an interesting struggle. I usually just follow any instructions blindly, searching any problems as I go. In this case it didn’t take too long.

Once installed, shot-scraper is pretty easy to use. shot-scraper https://johnjohnston.info dumps an image, johnjohnston-info.png.

There are a lot of options: you can output jpegs rather than pngs, run some javascript before taking a screenshot, or wait for a while. You can even choose a section of the page to grab.
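A couple of examples of those options, the same flags that turn up in the script below (the selector and wait time here are just made up for illustration):

    # jpeg output at a chosen quality instead of a png
    shot-scraper https://johnjohnston.info -o johnjohnston.jpg --quality 80
    # wait two seconds, then grab just one section of the page
    shot-scraper https://johnjohnston.info --wait 2000 -s "#content" -o content.png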

So I can use shot-scraper to create screenshots of each LA homepage. Then display them on a web page for a quick overview of Glow Blogs.


    #!/bin/bash

    cd /Users/john/Documents/scripts/glowscrape/img

    # two-letter codes for each Local Authority blog, plus the central glowblogs site
    URLLIST="ab as ac an ce cl dd dg ea ed el er es fa fi gc glowblogs hi in mc my na nl or pk re sa sb sh sl st wd wl"
    for i in $URLLIST ;
    do
        # -j hides the cookie banner, -s screenshots only the #glow-latest-posts section
        /usr/local/bin/shot-scraper -s "#glow-latest-posts" -j "jQuery('.pea_cook_wrapper').hide()" --quality 80 https://blogs.glowscotland.org.uk/"$i" -o  "$i".jpg && continue
    done;
    

This first hides the cookie banner displayed by blogs and then screenshots the #glow-latest-posts section of the page only.

The script continues by copying the images over to my raspberry pi, where they are shown on a web page.
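Something along these lines would do the copying (the pi’s hostname and web folder here are made up for illustration):

    # sync the screenshots over to the pi's web folder
    rsync -av /Users/john/Documents/scripts/glowscrape/img/ pi@raspberrypi.local:/var/www/html/gb/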

I hit a couple of problems along the way. The first was that the script stopped running when it could not find the #glow-latest-posts section. This happens on a couple of LAs which have no public blogs. Adding && continue to the screenshot line fixed that.

The second problem came when I wanted to run the script regularly. OSX schedules tasks with launchd. I’ve used Lingon X to schedule a few of these. Since I recently updated my system I first needed to get a new version of Lingon X. I then found that increased security gave me a few hoops to jump through to get the script to run.
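Lingon X is essentially a friendly front end for writing launchd property lists. Doing the same thing by hand would look roughly like this (the label, schedule and script path are made up for illustration):

    # write a LaunchAgent plist that runs the script every morning
    cat > ~/Library/LaunchAgents/info.johnj.glowscrape.plist <<'EOF'
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key><string>info.johnj.glowscrape</string>
        <key>ProgramArguments</key>
        <array>
            <string>/Users/john/Documents/scripts/glowscrape/glowscrape.sh</string>
        </array>
        <key>StartCalendarInterval</key>
        <dict>
            <key>Hour</key><integer>7</integer>
            <key>Minute</key><integer>0</integer>
        </dict>
    </dict>
    </plist>
    EOF
    # load it so launchd picks up the schedule
    launchctl load ~/Library/LaunchAgents/info.johnj.glowscrape.plist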

I think it would have been simpler to do the whole job on a raspberry pi. But I was not sure if it would run shot-scraper. I’ll leave that for another day and a newer pi.

This is a pretty trivial use of a very powerful tool. I’ve now got a webpage that gives me a quick overview of what is going on in Glow Blogs and took another baby step in bash.

The first thing that surprised me was the lack of featured images on the blog posts. These not only make the LA home pages look nicer, they also make blog posts displayed on Twitter more attractive.

A bit of twitter research⤴

from @ wwwd – John's World Wide Wall Display

graph of the number of twitter clients used by schools

I’ve talked to a fair number of teachers who find it easier to use twitter than to blog to share their classroom learning. I’ve been thinking a little about how to make that easier but got sidetracked wondering how schools, teachers and classes use twitter.

If you use twitter on the web it tells you the application used to post the tweet. At the bottom of a tweet there is the date and the app that posted the tweet.

I’ve got a list of North Lanarkshire schools that I started when I was supporting ICT in the authority.

I could go down the list and count the methods but I thought there might be a better way. I recalled having played with the twitter api a wee bit, so searched for and found: GET lists/statuses — Twitter Developers. I was hoping there was some sort of console to use, but could not find one. A wee bit more searching found how to authenticate to the api using a token and how to generate that token: Using bearer tokens.
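For reference, the bearer token for this kind of app-only access can be generated with a single request, something like this (the key and secret come from a twitter developer app; the names below are placeholders):

    # swap in your own API key and secret; the json that comes back
    # contains the "access_token" to use as the Bearer token
    curl -s -u "API_KEY:API_SECRET" --data 'grant_type=client_credentials' 'https://api.twitter.com/oauth2/token'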

It then didn’t take too long to work out how to pull in a pile of status updates from the list using the terminal:

curl --location --request GET 'https://api.twitter.com/1.1/lists/statuses.json?list_id=229235515&count=200&max_id=1225829860699930600' --header 'Authorization: Bearer BearerTokenGoesHere'

This gave me a pile of tweets in json format. I had a vague recollection that google sheets could parse json so gave that a go. I had to upload the json somewhere I could import it into a sheet, which felt somewhat clunky. I did see some indications that I could use a script to grab the json in sheets, but thought it might be simpler to do it all on my mac. More searching, but I fairly quickly came up with this:

curl --location --request GET 'https://api.twitter.com/1.1/lists/statuses.json?list_id=229235515&count=200&' --header 'Authorization: Bearer BearerTokenGoesHere' | jq '.[].source' | sed -e 's/<[^>]*>//g' | sort -bnr | uniq -c | sort -bnr

This does the following:

  1. downloads the statuses in json format
  2. passes them to the jq application (which I had installed in the past), which pulls out a list of the sources
  3. passes that to sed, which strips the html tags leaving the text (I just searched for this, I have no idea how it works; there’s a small example below the counts)
  4. next the list is sorted
  5. then uniq pulls out the unique entries and counts them
  6. finally the counts are sorted, which gave:
119 "Twitter for iPhone"
  28 "Twitter for Android"
  22 "Twitter Web App"
   8 "Twitter for iPad"
   1 "Twitter Web Client"
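The sed bit is less mysterious than it looks: each source value is a wee chunk of html, and the pattern just deletes anything between < and >. For example:

    echo '<a rel="nofollow">Twitter for iPhone</a>' | sed -e 's/<[^>]*>//g'
    # prints: Twitter for iPhone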

These counts surprised me. I use my school iPad to post to twitter and sort of expected iPads to be highest or at least higher.

It may be that the results are skewed by the Monday and Tuesday holiday and 2 in-service days, so I’ll run this a few times next week and see. You can also use a max_id parameter, so I could gather more than 200 (less retweeted content) tweets.
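A rough sketch of how that paging could work, assuming jq and a bearer token in a $BEARER variable (untested beyond the idea):

    # fetch a few pages of the list, walking max_id back each time
    LIST_ID=229235515
    MAX_ID=""
    for page in 1 2 3; do
        URL="https://api.twitter.com/1.1/lists/statuses.json?list_id=${LIST_ID}&count=200"
        if [ -n "$MAX_ID" ]; then URL="${URL}&max_id=${MAX_ID}"; fi
        curl -s --header "Authorization: Bearer $BEARER" "$URL" > page_${page}.json
        # the oldest tweet is last in each page; start the next page just below it
        MAX_ID=$(( $(jq -r '.[-1].id_str' page_${page}.json) - 1 ))
    done
    # count the sources across all the pages
    cat page_*.json | jq -s 'add | .[].source' | sed -e 's/<[^>]*>//g' | sort | uniq -c | sort -bnr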

This does give me the idea that it might be worth explaining how to make posting to Glow Blogs simpler using a phone.

2018 flickrs by⤴

from @ wwwd – John's World Wide Wall Display

I liked the Pummelvision service, so when it went I sort of made my own, which led to this: Flickr 2014 and DIY pummelvision and 2016 Flickring by.

I went a little early this year:

I’ve updated the script (gist) to handle a couple of new problems.

  1. Some of my iPhone photos were upside down in the video, as ffmpeg doesn’t see the rotation EXIF. I installed jhead via homebrew to deal with this.
  2. I installed sox to duplicate the background track, as I took more photos and slowed them down a bit this year. (There’s a rough sketch of both steps below.)
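Roughly, those two fixes boil down to something like this (the exact file names here are made up; the gist has the real details):

    jhead -autorot *.jpg                  # rotate jpegs to match their EXIF orientation so ffmpeg gets them the right way up
    sox pum.mp3 pum.mp3 pum-double.mp3    # two copies of the tune back to back to cover the longer, slower slideshow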

I have great fun with this every time I try it. I quite like the results, but the tinkering with the script is the fun bit. I’m sure it could be made a lot more elegant but it works for me.

2016 Flickring by⤴

from @ John's World Wide Wall Display

montage-flickr-2016

A couple of years ago I made a video of all my flickr photos in the style of the now dead pummelvision service.

I dug out the script, tidied it up a little, and made the above video with my 2016 photos.

I uploaded the script in the unlikely event that someone else would want to do something like this. It is not a thing of beauty; I am well out of my depth and just type and test. The script needs ffmpeg on your computer (I’d guess mac only, as it uses sips to resize images) and a Flickr API key.

The script also leaves you with up to 500 images in a folder. Before I deleted them I made a montage and averaged them using ImageMagick.

montage -mode concatenate -tile 25x *.jpg out.jpg, which made the featured image on this post.

and

convert *.jpg -average aver.jpeg

aver

I guess all that the average proves is that most of my photos are landscapes, given the hint of a sky…

    Twitter Listed⤴

    from @ John's World Wide Wall Display

    twitter-lists-resized

    Another interesting idea from Alan. I read his post: Measurement or [indirect] Indicators of Reputation? A Twitter List / Docker / iPython Notebook Journey and then Amy’s List Lurking, As Inspired by Alan Levine.

    The idea is that you can find out something about a person/yourself by the twitter lists they are listed in.

    Alan went down a nice rabbit hole involving Docker & iPython. This seemed as if it might be a mite tricky. I think I’ve messed up my mac’s python setup by trying to get iPython Notebooks working before. Alan’s approach is a lot more sensible, and I hope to re-visit it later. In the meantime I thought I would try out something a little simpler. This approach is simply sorting and manipulating text files, mostly with, in my case, TextMate’s sorting and a bit of bash in the terminal.

    So:

    1. I went to the list on twitter and copied all of the text on the page.
    2. Pasted that into a text document
    3. Manually cleaned up the bits above and below the list (a couple of selections and backspace). This produced a list that repeated the following pattern:
      • Name of list by Name of lister
      • Subtitle/description of list, sometimes not there
      • Number of Members
    4. I sorted the list. This grouped all of the lines with the number of Members together, with a couple of lists that started with a number above them.
    5. Select all the member lines and delete
    6. There were a lot of lines with “Visit http://twibes.com/education/twitter-list to join the top education Twitter people” as a description, so it was easy to delete them too.
    7. I saved this as a file, list1.txt
    8. What I was looking for was the lines that were list names, not descriptions, and I wanted the lists rather than the names of the people who made the lists. So I made the lists into two columns by replacing “ by ” with a TAB and saved the file.
    9. We then sort the list by the second column using the terminal: sort -k 2 -t $'\t' list1.txt > list2.txt 1 Lines where the second column is empty float to the top and can easily be deleted.
    10. Next we cut the first column out, which gives me a list of the list names: cut -f 1 list2.txt | sort > list3.txt (see the sketch after this list).
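    For what it’s worth, steps 8–10 could probably be done entirely in the terminal along these lines (the intermediate file name list2col.txt is made up):

    sed $'s/ by /\t/' list1.txt > list2col.txt     # turn "list name by lister" into two TAB-separated columns
    sort -k 2 -t $'\t' list2col.txt > list2.txt    # sort on the lister column; lines with no second column float to the top
    cut -f 1 list2.txt | sort > list3.txt          # keep just the list names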

    So I now have a list of the twitter lists I am a member of. I can use that in wordle.net to get a word cloud. I made a few, removing the most popular words to see the others in more relief. I’ve tied them together in a gif at the top of this post.

    Amy’s approach was to look for interesting list names; here are some of my favourites (I’ve added descriptions when they are there):

    • awesome rasbperrypi peopl
    • audiophiles
    • Botmakers: Blessed are the #botALLIES
    • Digital cool cats: Digital humanities/learning tech/cool stuff peeps
    • People I met through DS106
    • not to be messed with
    • Coolest UK Podcasters
    • Very funky Ed Blogs

    Of course these are not the most numerous, but they are, to me, the most flattering ;-)

    On this 10th birthday of twitter you might enjoy a quick browse through the name of the lists you are a member of.

    1. sort -k 2 -t $'\t' list1.txt > list2.txt This sorts by the second column (-k 2, the key) and uses a tab ($'\t') to separate the columns

    Flickr 2014 and DIY pummelvision⤴

    from @ John's World Wide Wall Display

    A few years back I used pummelvision to make a video of all of my flickr photos. Pummelvision was an online service where you pointed it to your flickr stream and it built a video for you and posted to vimeo. It could also take input from tumblr and facebook.

    I thought it might be interesting to make a similar video for my photos this year. However, going to look for pummelvision drew a blank; the company had closed. I then thought it might be interesting to try to create a similar video. From my memory and looking at my old video, pummelvision made a very fast video with no transitions. As far as I remember it just used one tune. I downloaded my old video from vimeo and extracted the audio file using QuickTime Pro. I took a guess that the frame rate was about 6 photos per second.

    Grabbing the images

    I guess there are a few ways to grab all your photos from flickr, but this is how I did it. If I was doing it frequently I’d look into automating it, but this was a one-off, or once a year if I do it again.

    Flickr’s api would allow you to do this, but it seemed a bit excessive to try and write a pile of code. The Flickr API has a section to test all of the commands, so I headed to: Flickr Api Explorer – flickr.photos.search. There I put my own user ID in, set the min_date_taken and max_date_taken, increased the per_page to 500 and added the large photo url to the extras field.

    This produced an xml file with all the information:

    Flickr xml

    I then extracted the 397 urls from the text. There will be many ways to do this, but I am experimenting with the Sublime Text application at the moment: it found & selected all of the https: occurrences, and then with cmd-shift-right arrow I expanded the selection to the quotes. One copy got all of the urls!

    Once I had a list of urls I edited those so that each line was:

    curl "https://farm4.staticflickr.com/3897/14598292323_ae6462fa07_b.jpg" > image_183.jpg

    With the image numbers sequential and padded to 3 characters, eg image_001.jpg, image_002.jpg etc. I also numbered them in reverse so the oldest photo would be first.
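    If I were doing this often I’d script that editing too; a rough sketch, assuming the xml is saved as flickr.xml and comes back newest first (file names made up):

    # pull the _b.jpg urls out of the saved xml and reverse them so the oldest photo comes first
    grep -o 'https://[^"]*_b\.jpg' flickr.xml | tail -r > urls.txt   # tail -r reverses on a mac, tac on linux
    # write a numbered curl line for each url, padded to three digits
    n=1
    while read -r url; do
        printf 'curl "%s" > image_%03d.jpg\n' "$url" "$n"
        n=$((n+1))
    done < urls.txt > dl.sh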

    I saved this text as a file, dl.sh and moved into the terminal:

    cd path/to/thefolder
    chmod +x dl.sh
    ./dl.sh

    This code set the dl.sh file to executable and then ran it; the terminal filled with text and the folder with images. curl is a command-line tool for downloading files.

    Sizing images

    Downloading the large size gave a folder full of images, but some were landscape and some portrait, ie 1024 × 768 or 768 × 1024. I needed the images to all be the same size. So I used the sips utility to first resize them, sips --resampleHeight 768 *.jpg, then to pad the portrait ones: sips --padToHeightWidth 768 1024 *.jpg

    Which gives me pictures like this for the portrait ones:

    Img 076 toad

    Making the movie

    I discounted using iMovie, Movie Maker or the like as I wanted something quick (not necessarily quick this time…) and that could be automated. I am also not sure if iMovie can show as fast as 6 frames per second. (Update: a quick look shows iMovie can set speed to fractions of a second per frame.)

    I thought of a couple of ways to make the movie: QuickTime Pro or ffmpeg. QuickTime Pro proved the easiest option: I opened the app, File -> Open Image Sequence…, chose 6 frames per second, then all I had to do was save the movie.

    Unfortunately QuickTime Pro has been replaced by QuickTime and it is a bit of a bother to get your old QT Pro working if you had paid for a license. So I thought I’d figure out ffmpeg too.

    With great power comes great complexity

    FFmpeg is “A complete, cross-platform solution to record, convert and stream audio and video.” It is a command line application and has a lot of variables. I can usually find out the right command with a bit of google. This one took quite a lot of google and failures. Most of these failures came from me trying to set a framerate, which led to skipped frames. Eventually I dropped the idea of using the framerate options and got a very (too) fast video with this:

    ffmpeg -f image2 -i IMG_%03d.jpg -c:v libx264 -pix_fmt yuv420p out.mp4

    Note to self: in the -i, input option, IMG_%03d.jpg means all the images with 3-digit numbers, eg 001, 002… 375

    I then slowed it down a little with this:

    ffmpeg -i out.mp4 -filter:v "setpts=4.0*PTS" 2014-flickr-show.mp4

    And added the audio:

    ffmpeg -i 2014-flickr-show.mp4 -i pum.mp3 -map 0 -map 1 -codec copy -codec:a aac -strict experimental -b:a 192k -shortest 2014-flickr-show-audio.mp4

    It took a fair bit of google to get the audio right too, the -codec:a option seems to sort things out.
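    Looking at it again, I suspect the whole thing could be done in one go by setting the input frame rate up front rather than stretching the video afterwards; something like this (untested, and older ffmpeg builds may still want -strict experimental for aac):

    ffmpeg -framerate 6 -i IMG_%03d.jpg -i pum.mp3 -c:v libx264 -pix_fmt yuv420p -c:a aac -b:a 192k -shortest all-in-one.mp4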

    Whys and Wherefores

    As noted above, I could have done most of this with iMovie. But by using ffmpeg or QT pro, I’ve the opportunity to play, learn and possibly end up with an automated system. It would seem well within the realms of possibility to have a script that used the flickr api to download a bunch of images, perhaps for a year or with a tag and make a movie from them.

    I’ve now figured out how to do most of this by piecing together the above fragments and finding out a bit about loops and renaming files, but I’ve no idea how to create a bash script that will replace my hard-coded tags, usernames etc. with input; more to learn.
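    For what it’s worth, taking input in a bash script is mostly a matter of the positional parameters; a hypothetical start (the script name and defaults are made up):

    #!/bin/bash
    # usage: ./flickrshow.sh USERNAME YEAR [TAG]
    FLICKR_USER="$1"        # first argument
    YEAR="${2:-2014}"       # second argument, falling back to a default
    TAG="$3"                # optional third argument
    echo "building a video for $FLICKR_USER, year $YEAR, tag ${TAG:-none}"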

    Once you have a lot of jpgs

    You might as well do other things with them: Flickr 2014