Chris Essig

Walkthroughs, tips and tricks from a data journalist in eastern Iowa

Archive for the ‘Courier’ Category

Create an easy before, after photo interactive

with one comment

Note: This is cross-posted from Lee’s data journalism blog, which you can read over there by clicking here.

When we got our first big snow storm of the winter season a few months ago, we tried something a little bit different at the Courier.

Before the storm hit, we asked our Facebook fans to capture before and after photos of the outdoors taken from the same spot. Then we had them submit their photos to us. With those photos, we put together this before/after interactive that was a nice little traffic boost for us and complemented our weather coverage well.

We have done something similar to this in the past: This interactive, for instance, looks at urban sprawl in the Cedar Valley. It uses a slider that the reader needs to click and drag to toggle between the photos.

The slider works very well. However, I loved what the Detroit Free Press did recently when they compared before and after photos of a car manufacturing plant in their area (which Marisa Kwiatkowski wrote about in Lee’s data journalism blog). Instead of using a slider, the Free Press had the photos change based on mouse movement. The result is incredibly smooth and loads quickly.

After digging through their code, I found out it is very easy to do. Here’s how we did it at the Courier.

1. Set up a basic HTML page

<!DOCTYPE html>
<html>
<head><title>Before/after demo</title></head>
<body></body></html>

That’s about as basic as it gets right there.

2. Add your images to the “body” of the HTML page

<div style="position:relative; width:600px; height:460px;" class="trackMe">
    <img src="" class="beforeafterphotos" />
    <img src="" class="beforeafterphotos" />
</div>

You will want to delete the URLs for our images and replace them with your own. Make sure the images are the same size, and that the before and after photos are as similar as possible. I spent quite a bit of time editing, cropping and rotating the photos users sent to us so that each pair lined up as well as it could.

It is also important to note that the “after” image (beforeafter05b.jpg in the example above) is placed on the page before the “before” image is (beforeafter05a.jpg in the example above).

Lastly, it is very important that you wrap the images in a DIV with the class “trackMe.” And each image needs to be put in the class “beforeafterphotos.”

3. Add this Javascript code in the “head” of the HTML page

<script src="" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
	// Run this for every "trackMe" wrapper on the page
	$(".trackMe").each(function() {
		$(this).children("img:last").mousemove(function(e) {
			var offset = $(this).offset();
			var xpos = (e.pageX - offset.left);
			// Get this image's width, then turn the mouse position into an opacity
			var thisImage = $(this);
			var thisWidth = thisImage.width();
			var ipct = Math.abs(Math.round(((xpos - thisWidth) / thisWidth) * 100) / 100);
			thisImage.css({ 'opacity' : ipct });
		});
	});
});
</script>

This code basically detects mouse movement over the second of the two images, which is actually the “before” image (see above). It then figures out where the mouse is in relation to the image. Then it sets the opacity of that image based on the mouse’s location.

So if the mouse is to the left of the image, the “before” image’s opacity is set to 100 percent and is fully visible. If the mouse is in the middle of the image, the image’s opacity is set to 50 percent and is half way visible. If it is to the right, the image’s opacity is set to 0 percent and is invisible.

This function is called every time the mouse is moved. The effect for the reader is that as they move their mouse from the left side of the image to the right, the “before” image slowly fades out, revealing the “after” image.
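To see the arithmetic in isolation, here is the same opacity calculation sketched in Python (the 600-pixel width matches the demo's images):

```python
def before_opacity(xpos, width):
    # Opacity of the top ("before") image given the mouse's x position:
    # 1.0 at the left edge, 0.0 at the right edge -- the same math as
    # the "ipct" line in the Javascript.
    return abs(round(((xpos - width) / width) * 100) / 100)

print(before_opacity(0, 600))    # left edge  -> 1.0
print(before_opacity(300, 600))  # middle     -> 0.5
print(before_opacity(600, 600))  # right edge -> 0.0
```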

4. Add this CSS code to the “head” of the HTML page

.trackMe img.beforeafterphotos {
	position:absolute !important; /* stack the images on top of each other */
	top:0 !important;
	left:0 !important;
	margin:0 0 15px 0 !important;
}

This code just makes sure the images are layered on top of one another, instead of one above the other.

5. Full code is below:

<!DOCTYPE html>
<html>
<head>
<title>Before/after demo</title>
<style type="text/css">
.trackMe img.beforeafterphotos {
	position:absolute !important;
	top:0 !important;
	left:0 !important;
	margin:0 0 15px 0 !important;
}
</style>
<script src="" type="text/javascript"></script>
<script type="text/javascript">
$(document).ready(function() {
	$(".trackMe").each(function() {
		$(this).children("img:last").mousemove(function(e) {
			var offset = $(this).offset();
			var xpos = (e.pageX - offset.left);
			var thisImage = $(this);
			var thisWidth = thisImage.width();
			var ipct = Math.abs(Math.round(((xpos - thisWidth) / thisWidth) * 100) / 100);
			thisImage.css({ 'opacity' : ipct });
		});
	});
});
</script>
</head>
<body>
<div style="position:relative; width:600px; height:460px;" class="trackMe">
	<img src="" class="beforeafterphotos" />
	<img src="" class="beforeafterphotos" />
</div>
</body>
</html>

That’s all it takes!

6. Add more images

If you want to add more images, just repeat step 2 with the new images. For the complete code that I used on the snow before/after project, click here.

Written by csessig

February 25, 2013 at 3:09 pm

How We Did It: Waterloo crime map

with 3 comments

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Last week we launched a new feature on the Courier’s website: A crime map for the city of Waterloo that will be updated daily Monday through Friday.

The map uses data provided by the Waterloo police department. It’s presented in a way to allow readers to make their own stories out of the data.

(Note: The full code for this project is available here.)

Here’s a quick run-through of what we did to get the map up and running:

1. Turning a PDF into manageable data

The hardest part of this project was the first step: Turning a PDF into something usable. Every morning, the Waterloo police department updates their calls for service PDF with the latest service calls. It’s a rolling PDF that keeps track of about a week of calls.

The first step I took was turning the PDF into an HTML document using the command-line tool PDFtoHTML. For Mac users, you can download it by going to the command line and typing in “brew install pdftohtml.” Then run “pdftohtml -c (ENTER NAME OF PDF HERE)” to turn the PDF into an HTML document.

The PDF we are converting is basically a spreadsheet. Each cell of the spreadsheet is turned into a DIV with PDFtoHTML. Each page of the PDF is turned into its own HTML document. We will then scrape these HTML documents using the programming language Python, which I have blogged about before. The Python library that will allow us to scrape the information is Beautiful Soup.

The “-c” command adds a bunch of inline CSS properties to these DIVs based on where they are on the page. These inline properties are important because they help us get the information off the spreadsheet we want.

All dates and times, for instance, are located in the second column. As a result, all the dates and times have the exact same inline left CSS property of “107” because they are all the same distance from the left side of the page.

The same goes for the dispositions. They are in the fifth column and are farther from the left side of the page so they have an inline left CSS property of “677.”

We use these properties to find the columns of information we want. The first thing we want is the dates. With our Python scraper, we’ll grab all the data in the second column, which is all the DIVs that have an inline left CSS property of “107.”

We then have a second argument that uses regular expressions to make sure the data is in the correct format, i.e. numbers and not letters. We do this to make sure we are pulling dates and not text accidentally.

The second argument is basically an insurance policy. Everything we pull with the CSS property of “107” should be a date. But we want to be 100% sure, so we use regular expressions to confirm we have numbers and not strings.

The third column is the reported crimes. But in our converted HTML document, crimes are actually located in the DIV previous to the date + time DIV. So once we have grabbed a date + time DIV with our Python scraper, we will check the previous DIV to see if it matches one of the seven crimes we are going to map. For this project, we decided not to map minor reports like business checks and traffic stops. Instead we are mapping the seven most serious reports.

If it is one of our seven crimes, we will run one final check to make sure it’s not a cancelled call, an unfounded call, etc. We do this by checking the disposition DIVs (column five in the spreadsheet), which are located before the crime DIVs. Also remember that all these have an inline left CSS property of “677”.

So we check these DIVs with our dispositions to make sure they don’t contain words like “NOT NEEDED” or “NO REPORT” or “CALL CANCELLED.”

Once we know it’s a crime that fits into one of our seven categories and it wasn’t a cancelled call, we add the crime, the date, the time, the disposition and the location to a CSV spreadsheet.

The full Python scraper is available here.

2. Using Google to get latitude, longitude and JSON

The mapping service I used was Leaflet, as opposed to Google Maps. But we will need to geocode our addresses to get latitude and longitude information for each point to use with Leaflet. We also need to convert our spreadsheet into a JSON (JavaScript Object Notation) file.

Fortunately that is an easy and quick process thanks to two gadgets available to us using Google Docs.

The first thing we need to do is upload our CSV to Google Docs. Then we can use this gadget to get latitude and longitude points for each address. Then we can use this gadget to get the JSON file we will use with the map.
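For what it's worth, Python's standard library can also handle the CSV-to-JSON step if you'd rather not use the second gadget. A quick sketch, with hypothetical column names:

```python
import csv, io, json

# Stand-in for the geocoded spreadsheet; the column names are hypothetical
csv_text = """crime,date,lat,lng
BURGLARY,01/15/2013,42.4928,-92.3426
ASSAULT,01/20/2013,42.5007,-92.3353
"""

# One dict per spreadsheet row, then dump the whole list as JSON
rows = list(csv.DictReader(io.StringIO(csv_text)))
print(json.dumps(rows, indent=2))
```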

3. Powering the map with Leaflet, jQRangeSlider, DataTables and Bootstrap

As I mentioned, Leaflet powers the map. It uses the latitude and longitude points from the JSON file to map our crimes.

For this map, I created my own icons. I used a free image editor known as Seashore, which is a fantastic program for those who are too cheap to shell out the dough for Adobe’s Photoshop.

The date range slider below the map is a very awesome tool called jQRangeSlider. Basically every time the date range is moved, a Javascript function is called that will go through the JSON file and see if the crimes are between those two dates.

This Javascript function also checks to see if the crime has been selected by the user. Notice on the map the check boxes next to each crime logo under “Types of Crimes.”

If the crime is both between the dates on the slider and checked by the users, it is mapped.
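That two-part test is simple to express. Here's a sketch in Python (the map itself does this in Javascript, and these records are hypothetical):

```python
from datetime import date

# Hypothetical records, standing in for points from the map's JSON file
crimes = [
    {"type": "burglary", "date": date(2013, 1, 10)},
    {"type": "assault",  "date": date(2013, 1, 20)},
    {"type": "burglary", "date": date(2013, 2, 5)},
]

def visible(crime, start, end, checked):
    # Mapped only if inside the slider's date range AND its checkbox is ticked
    return start <= crime["date"] <= end and crime["type"] in checked

shown = [c for c in crimes
         if visible(c, date(2013, 1, 1), date(2013, 1, 31), {"burglary"})]
print([c["type"] for c in shown])  # ['burglary']
```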

While this is going on, an HTML table of this information is being created below the map. We use another awesome tool called DataTables to make that table of crimes interactive. With it, readers can display up to 100 records on the page or search through the records.

Finally, we create a pretty basic bar chart using the Progress Bars made available by Bootstrap, an awesome interface released by the people who brought us Twitter.

Creating these bars is easy: We just need to create DIVs and give them a certain class so Bootstrap knows how to style them. We create a bar for each crime that is automatically updated whenever we tweak the map.

For more information on progress bars, check out the documentation from Bootstrap. I also want to thank the app team at the Chicago Tribune for providing the inspiration behind the bar chart with their 2012 primary election app.

The full Javascript file is available here.

4. Daily upkeep

This map is not updated automatically so every day, Monday through Friday, I will be adding new crimes to our map.

Fortunately, this only takes about 5-10 minutes of work. Basically I scrape the last few pages of the police’s crime log PDF, pull out the crimes that are new, pull them into Google Docs, get the latitude and longitude information, output the JSON file and put that new file into our FTP server.

Trust me, it doesn’t take nearly as long as it sounds to do.

5. What’s next?

Besides minor tweaks and possible design improvements, I have two main goals for this project in the future:

A. Create a crime map for Cedar Falls – Cedar Falls is Waterloo’s sister city and like the Waterloo police department, the Cedar Falls police department keeps a daily log of calls for service. They also post PDFs, so I’m hoping the process of pulling out the data won’t be drastically different from what I did for the Waterloo map.

B. Create a mobile version for both crime maps – Maps don’t work tremendously well on mobile phones. So I’d like to develop some sort of alternative for mobile users. Fortunately, we have all the data. We just need to figure out how to display it best for smartphones.

Have any questions? Feel free to e-mail me at

Spreadsheet tutorials to help journalists clean and format data

leave a comment »

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

I was talking to a colleague the other day who was interested in getting into computer programming and more data projects. He asked where the best place to start was.

My gut reaction was to tell him to learn the basics of spreadsheets. Almost all of the data I have used with projects — whether they end up being a map or a graphic — were initially set up in spreadsheet form.

Most of us are familiar with spreadsheets and have likely worked with Microsoft Excel. Agencies both big and small often gather spreadsheets of useful information for us to use with our stories.

The problem is most of the time the data isn’t formatted right or contains inaccuracies like misspellings. That is where the magic of spreadsheet formulas can come in to help organize your data.

Here are a few resources you might find handy for working with spreadsheets.

1. Poynter: How journalists can use Excel to organize data for stories

This walkthrough is great because it is written directly for journalists. It is also intended for beginners so those with no spreadsheet knowledge will be able to keep up. Finally, the walkthrough is intended for both print and online journalists. So journalists who have no intention of making visualizations will still find a handful of features in Excel to help them with their day-to-day reporting.

2. My Favorite (Excel) Things

This PDF provided by Mary Jo Webster at the St. Paul Pioneer Press is a great, concise list of Excel formulas she uses all the time. It includes formulas on how to format dates, run sums and even run if-else statements in Excel. It’s one of my favorite resources for Excel and definitely worth bookmarking.

3. Google Docs – Spreadsheet resources

Not a fan of Excel or don’t have it installed on your work computer? Fortunately you can make a spreadsheet with Google by logging into Google Drive and clicking “Create > Spreadsheet.” The best part is spreadsheets you create with Google can be accessed from any computer as long as you log in with your Google account.

Here are some resources for getting started with Google spreadsheets:

4. Google Refine resources

Sometimes you need more than just spreadsheet formulas to clean dirty data. That’s where the powerful Google Refine program can come into play. The program was designed to clean dirty data by finding inconsistencies in your spreadsheets. It can also help you sort data, add to it, transform it from one format to another and much, much more.

Here are some resources you might find handy:

5. NICAR-L e-mail list

Still stuck? Fortunately, there is a wonderful community of computer-assisted reporters who are more than willing to help others out. If you want great information on spreadsheets or any other data journalism topic, check out the National Institute of Computer-Assisted Reporting’s email list. Questions on Excel come up almost every day.

Have any other useful resources not listed here? E-mail me at

Written by csessig

June 25, 2012 at 9:37 am

Resources for moving away from Google maps

leave a comment »

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

For some time now, Google maps has been the go-to source for map-making journalists.

And for good reason. The maps load quickly, can be easily hooked up to other Google products like Fusion Tables and charts, are free (mostly) and feature an incredible API for developers.

But recently, many journalists have been moving away from Google Maps. Part of this was spurred by a recent announcement by Google that they would be charging for their map API (although they backtracked a bit after receiving flak from news organizations).

Others are just sick of seeing the same backdrops on every map featured on news websites. A little variety never hurt anybody.

Mostly because of the latter, I’ve started experimenting with products like Leaflet, Tilemill and Wax to make our interactive maps. Here’s a couple of examples:

1. Public hunting grounds in Iowa

2. Child abuse reports by county

If you are like me and trying to spice things up, here are some resources that helped me and will hopefully help you as well:

Leaflet docs – Leaflet is very similar to Google Maps; it provides your map with a backdrop, highlights locations, gives you zoom controls, etc. But unlike Google Maps, Leaflet allows users to make their own backdrops and upload them online. Other users can use the backdrops on their own maps. Leaflet also allows map-makers to put points on the map similar to Google.

CloudMade Map Style Editor – Here’s a collection of user-generated Leaflet maps you can use with your next project. Be careful not to get sucked in for hours like I do.

AP: Bringing Maps to Fruition – If you are looking for a good example of Leaflet in action with source code and all, download this ZIP file provided by Michelle Minkoff, Interactive Producer for the Associated Press in Washington. She presented it at NICAR 2012 and inspired me to try it out.

TileMill – Not satisfied with any of the backdrops you’ve seen? Why not create your own? With TileMill, that is not only possible but easy (if you know basic CSS) and ridiculously addicting. You can also use TileMill, for instance, to color code counties using tiles, which are basically very small PNG image files. You can then use Google Maps as the backdrop (See below) for the map.

Chicago Tribune: Making maps (five part series) – It’s fair to say this is required reading for any journalist looking to make maps. This thorough read goes through the steps of preparing your data, putting it in a database, using the tile rendering program TileMill to make a great-looking county population map and deploying it with Google Maps. Warning: It’s best if you familiarize yourself with the command line, and if you are a Mac user, DOWNLOAD ALL THE DEPENDENCIES (you can) with Homebrew. See “Requirements” on the second part of the tutorial for a list of dependencies.

Minn Post: How we built the same-day registration map – After you are done styling your map in the above tutorial, the Trib guys will show you how to deploy your tiles with a Python script. This is great but not how I have done it. Instead, I used this fantastic walkthrough from the Post’s Kevin Schaul to render the tiles, post them with Leaflet and make them interactive with Wax.

Geocoding with Google Spreadsheets – One downfall of TileMill is it doesn’t recognize addresses like Google Maps does. Use this site to find out the latitude and longitude of addresses. Warning: You can only do 99 addresses at a time. If you know of a better resource, I’d love to know about it.

Leaflet recipe: Hover events on features and polygons – I have not gone through this walk-through from Ben Walsh at the L.A. Times, but it looks like a solid way to incorporate hover events with your Leaflet maps.

NRGIS Library for Iowa – Journalists at other Iowa newspapers will be happy to know this website exists with plenty of shapefiles of area roads, lakes, interstates, parks, you name it. It’s what I used to style all the points on the hunting grounds map. And if you don’t live in Iowa, just ask around and you may hit the jackpot like I did!

Maki Icons – Tilemill also supports icons. Here’s a great list of sharp looking icons you can use with the program.

QGIS 1 and QGIS 2 – If you are looking to make static, interactive maps and don’t need a zoom function, check out the (free!) QGIS program. With it, you can import polygons (with shapefiles) or points on a map (using CSVs) and style them. You can then export them as PNG files and make them interactive. Check out this two-part tutorial for more information.

As always, feel free to e-mail me with questions or if you know of any great resources not listed:

Written by csessig

April 16, 2012 at 9:51 am

Posted in Courier, Data, Leaflet, Maps, TileMill, Wax

Turning Blox assets into timelines: Last Part

with 2 comments

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Also note: You will need to be running on the Blox CMS for this to work. That said, you could probably learn a thing or two about web-scraping even if you don’t use Blox.

For part one of this tutorial, click here. For part two, click here.


If you’ve followed along thus far, you’ll see we’re almost done turning a collection of Blox assets — which can include articles, photos and videos — into timelines using a tool made available by ProPublica called TimelineSetter.

Right now, we are in the process of creating a CSV file, which we will use with the program TimelineSetter. This program takes the CSV file and creates a nice-looking timeline out of it.

The CSV file is created by using a Python script that scrapes a web page we created with all of our content on it. The full code for this script is available here.

In the second part of the series, we went ahead and created a Python scraper that will pull all the information on the page we need.

1. If you’ve gone through the second part, you can go ahead and run python now on your command line (More information on how to run the scraper is available in the first part of the blog).

You’ll notice the script will output a CSV that has all the information we need. But some of it is ugly. We need to delete certain tags, for instance, that appear before and after some of the text.

But before we can run Python commands to delete these unnecessary tags, we need to convert all of the text into strings. This is basically just text. Integers, by contrast, are numbers.

So let’s do that first:

# Extract that information in strings
    date2 = str(date)
    link2 = str(link)
    headline2 = str(headline)
    image2 = str(image)

    # These are pulled from outside the loop
    description2 = str(description)

Note: This code and the following chunks should go inside the loop statement we created in the second part of the series. This is because we want these changes to take effect on every item we scrape. If you are wondering, I tried to explain loop statements in my second blog. Your best bet though is to ask Google about loop statements.

2. Now that we’ve converted everything into strings, we can get rid of the ugly tags. You’ll notice that the descriptions of our stories start and end with a “p” tag. Similarly, all dates start and end with an “em” tag.

Because of this, I added a few lines to the Python script. These lines will replace the ugly tags with nothing, effectively deleting them before they are put into the CSV file:

# Extra formatting needed for dates to get rid of em tags and unnecessary formatting
    date4 = date3.replace('[<em>', "")
    date5 = date4.replace('</em>]', "")
    date6 = date5.replace('- ', "")
    date7 = date6.replace("at ", "")

    # Extra formatting is also need for the description to get rid of p tags and new line returns
    description4 = description3.replace('[<p>', "")
    description5 = description4.replace('</p>]', "")
    description6 = description5.replace('\n', " ")
    description7 = description6.replace('[]', "")

3. For images, the script spits out “\d\d\d” for the width property. We will change this so it’s 300px, making all images on the page 300px wide. Also, we will delete the text “None,” which shows up if an image doesn’t exist for a particular asset.

# We will adjust the width of all images to 300 pixels. Also, Python spits out the word 'None' if it doesn't find an image. Delete that.
    image4 = re.sub(r'width="\d\d\d"', 'width="300"', image3)
    image5 = image4.replace('None', "")

4. If you are at all familiar with the way articles work in Blox, you know that when you update them, red text shows up next to the headline telling you when the story was last updated. The code that formats that text and makes it red is useless to us now. So we will delete this and replace it with the current time and date using the Python datetime module.

To use the datetime module, we have to first import it at the top of our Python script. We then need to call the object within the module that we want. A good introduction to Python modules is available here.

The object we want is called “”. As the name suggests, it returns the current date and time. I then store it in a variable called “now”, which makes it easier to use later in the script.

So the top of our page should look like this:

import urllib2
from BeautifulSoup import BeautifulSoup
import datetime
import re

now =

Inside the loop we call the “” object and replace the text “Updated” with the current date and time:

# If the story has been updated recently, an em class tag will appear on the
    # page showing the time but not the date. We will delete the class and
    # replace it with today's date. We can change the date in the CSV if we need to.
    date8 = date7.replace('[<em class="item-updated badge">Updated:', str(now.strftime("%Y-%m-%d %H:%M")))

5. Now there is just one last bit of cleaning up we will need to do. For those who don’t know, CSV stands for comma-separated values. This means that basically the columns we are creating in our spreadsheet are separated by commas. This is the preferred type of spreadsheet for most programs because it’s simple.

We can run into problems, however, if some of our data includes commas. So these next few lines in our script will replace all of the commas we scraped with dashes. You can change it to whatever character or characters you want:

# We'll replace commas with dashes so we don't screw up the CSV. You can change the dash to whatever character you want
    date3 = date2.replace(",", " -")
    link3 = link2.replace(",", " -")
    headline3 = headline2.replace(",", " -")
    description3 = description2.replace(",", " -")
    image3 = image2.replace(",", " -")

If you want to put the commas back into the timeline, you can do so after the final HTML file is created (after you run python Typically I’ll replace all the commas with “////” and then do a simple find and replace on the final HTML file with a text editor and change every instance of “////” back into a comma.
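The round trip is loss-free as long as your placeholder characters never appear in the real text — a quick sketch:

```python
raw = "Smith, John - 100 Main St."

safe = raw.replace(",", "////")       # keeps the CSV columns intact
# ... write `safe` into the CSV and run timeline-setter ...
restored = safe.replace("////", ",")  # the find-and-replace on the final HTML

print(restored == raw)  # True
```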

Now we have clean, concise data! We will now put this into the CSV file using the “write” command. Again, all these commands are put inside the loop statement we created in the second part of the series, so every image, description, date, etc. we scrape will be cleaned up and ready to go:

# Write the information to the file. The HTML code is based on coding recognized by TimelineSetter
    f.write(date8 + "," + description7 + "," + link3 + "," + '<h2 class="timeline-img-hed">' + headline3 + '</h2>' + image5 + "\n")

The headlines, you’ll notice, are put into an HTML “h2” tag. This will bold and increase the size of the headlines so they stick out when a reader clicks on a particular event in the timeline.

That’s it for the loop statement. Now we will add a simple line outside of the loop statement that closes the CSV file when all the loops we want run are done:

#You're done! Close file.
f.close()

And there you have it folks. We are now done with our Python script. We can now run it (see part 1 of the series) and have a nice, clean looking CSV file we can turn into a timeline.

If you have any questions, PLEASE don’t hesitate to e-mail or Tweet me. I’d be more than happy to help.

Written by csessig

March 16, 2012 at 8:38 am

Multiple layers and rollover effects for Fusion Table maps

with 3 comments

Note: Because of a change in the Fusion Tables API, the method for using rollover effects no longer works.

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

If you haven’t noticed by now, a lot of journalists are in love with Google Fusion Tables. I’m one of them. It’s helped me put together a ton of handy maps on deadline with little programming needed.

For those getting started, I suggest these two walkthroughs on Poynter. For those who have some experience with FT, here are a couple of options that may help you spruce up your next map.

Multiple Fusion tables on one map

Hypothetically speaking, let’s say you have two tables with information: one has county data and the other city data. And you want to display that data on just one map.

Fusion Tables makes it very easy to display both at the same time. We’ll start from the top and create a simple Javascript function to display our map (via the Google Maps API):

function initialize() {
	map = new google.maps.Map(document.getElementById('map_canvas'), {
		center: new google.maps.LatLng(42.5, -92.2),
		zoom: 10,
		minZoom: 8,
		maxZoom: 15,
		mapTypeId: google.maps.MapTypeId.TERRAIN
	});
	loadmap();
}
At the end of the initialize function we call a function, “loadmap()”. With this function, we will actually pull in our Fusion Tables layers. For this example we’ll bring in two layers instead of one. Notice how strikingly similar the two are:

function loadmap() {
	layer2 = new google.maps.FusionTablesLayer({
		query: {
			select: 'geometry',
			from: 2814002
		}
	});

	layer = new google.maps.FusionTablesLayer({
		query: {
			select: 'Mappable_location',
			from: 2813443
		}
	});
}
That’s it! You now have one map pulling in two sets of data. To see the full code for this map, click here.

Rollover effects for Fusion Tables

One feature often requested in the Fusion Tables forums is mouse rollover events for Fusion Table layers. Typically, readers who look at a map have to click on a point or polygon to open up new data about it. A mouseover event allows that data to pop up when a reader simply hovers over the feature with their mouse.

A few months ago, some very smart person rolled out a “workable solution” for the rollover request. Here’s the documentation and here’s an example of it in the wild.

Another example is this map on poverty rates in Iowa. The code below is from this map and is very similar to the code on the documentation page:

layer.enableMapTips({
	select: "'Number', 'Tract', 'County', 'Population for whom poverty status is determined - Total', 'Population for whom poverty status is determined - Below poverty level', 'Population for whom poverty status is determined - Percent below poverty level', 'One race - White', 'One race - Black', 'Other', 'Two or more races'", // list of columns to query; typically you need only one column
	from: 2415095, // fusion table ID
	geometryColumn: 'geometry', // geometry column name
	suppressMapTips: true, // optional, whether to show map tips; default false
	delay: 1, // milliseconds the mouse pauses before sending a server query; default 300
	tolerance: 6 // tolerance in pixels around the mouse; default 6
});

	// here's the pseudo-hover
	google.maps.event.addListener(layer, 'mouseover', function(fEvent) {
		var NumVal = fEvent.row['Number'].value;
		layer.setOptions({
			styles: [{
				where: "'Number' = " + NumVal,
				polygonOptions: {
					fillColor: "#4D4D4D",
					fillOpacity: 0.6
				}
			}]
		});
	});
Note: It’s easiest to think of your Fusion Table as a list of polygons with certain values attached to them. For this poverty map, each row represents a Census Tract with values like the Tract name, the number of people within that tract who live in poverty, etc. And for this map, I made it so each polygon in the Fusion Table has its own unique number in the “Number” column.

Here’s a run through of what the above code does:

1.  We’ve already declared “layer” as a particular Google Fusion Table layer (see the second box of code above). Now the “layer.enableMapTips” will allow us to add rollover effects to that Fusion Table layer. The “select” option represents all the columns in that Fusion Table layer that you want to use with the rollover effect.

For instance, here’s the Fusion Table I’m calling in the above “enableMapTips” function. Notice how I’ve called all the columns with data (‘Tract’, ‘County’, etc.). I then told it which Fusion Table to look in with “from: 2415095.” Each Fusion Table has its own unique number; the number for my poverty Fusion Table is 2415095, which is the one called here. To find out what number your Fusion Table is, click File > About.

Finally, I’ve told it what column contains the geometry information for this Fusion Table (Again, go through this Poynter walkthrough to find out all you need to know about the geometry field). Mine is simply called “geometry.” Each row in the “geometry” column represents one polygon.

2. The second step is the “google.maps.event.addListener(layer, ‘mouseover’, function(fEvent).” Basically this says “anytime the reader rollovers a polygon, the following will happen.”

In this function, “fEvent” represents the polygon that the reader is currently hovering over. Let’s say I’m rolling over the polygon that is represented by the first row in the Fusion Table. It’s Census Tract 9601 and has the value of “1” in the “Number” column.

Every time a reader rolls over Census Tract 9601, the code “fEvent.row[‘Number’].value” goes into the Google Fusion Table, finds the Census Tract 9601 row and returns the value of the Number column, which is “1.” So var “NumVal” would become “1” when a reader rolls over Census Tract 9601.

The next part changes that polygon’s color. This happens with the “where” statement. This is saying, “when I rollover a polygon, find the polygon in the Fusion Table that represents ‘NumVal’ and change its color.” Since the variable “NumVal” represents the polygon currently being hovered over, this is the polygon that changes colors. For a reader, the output is simple: I rollover a polygon. It changes colors.

In short: I roll over Census Tract 9601, firing off the “google.maps.event.addListener” function. This function finds the value of the “Number” column. It returns “1.” The code then says change the color of the polygon that has a value of “1” in the “Number” column. This is Census Tract 9601, which I rolled over. It changes colors and life is good.
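The style change itself is just data, so you can factor it into a plain function and check it without a map at all. A sketch (the function name and the fake fEvent are mine; the row shape follows the event object described above):

```javascript
// Return the styles array the rollover would apply for a hovered polygon.
function hoverStyle(numVal) {
  return [{
    where: "'Number' = " + numVal,
    polygonOptions: {
      fillColor: "#4D4D4D",
      fillOpacity: 0.6
    }
  }];
}

// Fake event shaped like the fEvent the listener receives:
var fEvent = { row: { 'Number': { value: 1 } } };
var styles = hoverStyle(fEvent.row['Number'].value);
console.log(styles[0].where); // "'Number' = 1"
```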


If you go back up to “layer.enableMapTips” in the third box of code, you’ll notice there is an option for “suppressMapTips.” For the poverty map, I have it set to true. But what if you set it to false? Basically, any time a reader hovers over a point or polygon, a small box shows up next to it containing information on that point or polygon. Notice the small yellow box that pops on this example page.

This is a nifty feature and a great replacement for the traditional InfoBox (the box that opens when you click on a point in a Google map). The only problem is the default text size is almost too small to read. How do we change that? Fairly easily:

1. Download a copy of the FusionTips javascript file.

2. Copy the file to the same folder your map is in and add this at the top of your document header:

<script type="text/javascript" src="fusiontips.js"></script>

3. Open the FusionTips file and look for “var div = document.createElement(‘DIV’).” It’s near the top of the Javascript file.

4. This ‘DIV’ represents the MapTips box. By editing this, you can change how the box and its text will display when a reader hovers over a point on the map. For instance, this map of historical places in Iowa used MapTips but the text is larger, the background is white, etc. Here’s what the DIV looks like in my FusionTips javascript file:

FusionTipOverlay.prototype.onAdd = function() {
    var div = document.createElement('DIV');
    // Note: the property names below are reconstructed to fit the values used on my map
    div.style.border = "1px solid #999999";
    div.style.opacity = ".85";
    div.style.position = "absolute";
    div.style.whiteSpace = "nowrap";
    div.style.backgroundColor = "#ffffff";
    div.style.fontSize = '13px';
    div.style.padding = '10px';
    div.style.fontWeight = 'bold';
    div.style.margin = '10px';
    div.style.lineHeight = '1.3em';
    if (this.style_) {
      for (var x in this.style_) {
        if (this.style_.hasOwnProperty(x)) {
[x] = this.style_[x];
        }
      }
    }
    // ...the rest of the function is unchanged
};

Much better! Here’s the code for this map. And here are my three Fusion Tables.

I hope some of these tips help and if you have any questions, send me an e-mail. I’d be more than happy to help.

Written by csessig

February 7, 2012 at 4:34 pm

A few reasons to learn the command line

leave a comment »

Note: This is my first entry for Lee Enterprises’ data journalism blog. Reporters at Lee newspapers can read the blog by clicking here.

As computer users, we have grown accustomed to what is known as the graphical user interface (GUI). What’s GUI, you ask? Here are a few examples: When you drag and drop a text document into the trash, that’s GUI in action. Or when you create a shortcut on your desktop, that’s GUI in action. Or how about simply navigating from one folder to the next? You guessed it: that’s GUI in action.

GUI, basically, is the process of interacting with images (most notably icons on computers) to get things done on electronic devices. It’s easy and we all do it all the time. But there is another way to accomplish many tasks on a computer: using text-based commands. Welcome to the command line.

So where do you enter these text-based commands and accomplish these tasks? There is a nifty little program called the Terminal on your computer that does the work. If you’ve never opened up your computer’s Terminal, now would be a good time. On my Mac, it’s located in the Applications > Utilities folder.

A scary black box will open up. Trust me, I know: I was scared of it just a few months ago. But I promise there are compelling reasons for journalists to learn the basics of the command line. Here are a few:


1. Several programs created by journalists for journalists require the command line.

Two of my favorite tools out there for journalists come from ProPublica: TimelineSetter and TableSetter.

The first makes it easy to create timelines. We’ve made a few at the Courier. The second makes easily searchable tables out of spreadsheets (more specifically, CSV files), which we’ve also used at the Courier. But to create the timelines and tables, you’ll need to run very basic commands in your Terminal.

It’s worth noting the LA Times also has its own version of TableSetter called TableStacker that offers more customizations. We used it recently to break down candidates running in our local elections. Again, these tables are created after running a simple command.

The New York Times has a host of useful tools for journalists. Some, like Fech, require the command line to run. Fech can help journalists extract data from the Federal Election Commission to show who is spending money on whom in the current presidential campaign cycle.


2. Other programs/tools that journalists can use:

Let’s say you want to pull a bunch of information from a website to use in a story or visualization, but copying and pasting the text is both tedious and very time consuming.

Why not just scrape the data using a program made in a language like Python or Ruby and put it in a spreadsheet or Word document? After all, computers are great at performing tedious tasks in just a few minutes.

One of my favorite web scraping walkthroughs comes from BuzzData. It shows how to pull water usage rates for every ward in Toronto and can easily be applied to other scenarios (I used it to pull precinct information from the Iowa GOP website). The best way to run this program and scrape the data is to run it through your command line.

Another great walkthrough on data scraping is this one from ProPublica’s Dan Nguyen. Instead of using the Python programming language, like the one above, it uses Ruby. But the goal remains the same: making data manageable for both journalists and readers.

A neat mapping service that is gaining popularity at news organizations is TileMill. Here are a few examples to help get you motivated.

One of the best places to start with TileMill is this walkthrough from the application team at the Chicago Tribune. But beware: you’ll need to know the command line before you start (trust me, I learned the hard way).


3. You’ll impress your friends because the command line kind of looks like the Matrix

And who doesn’t want that?


Okay I’m sort of interested…How do I learn?

I can’t tell people enough how much these two command line tutorials from PeepCode helped me. I should note that each costs $12, but they are well worth it, in my opinion.

Also, there is this basic tutorial from HypeXR that will help. And these shortcuts from LifeHacker are also great.

Otherwise, a quick Google or YouTube search will turn up thousands of results. A lot of programmers use the command line and, fortunately, many are willing to help teach others.

Written by csessig

January 31, 2012 at 9:21 am