Chris Essig

Walkthroughs, tips and tricks from a data journalist in eastern Iowa

Archive for the ‘Data’ Category

How automation can save you on deadline

leave a comment »

In the data journalism world, data dumps are pretty common occurrences. And oftentimes these dumps are released periodically — every month, every six months, every year, etc.

Despite their periodic nature, these dumps can catch you off guard, especially if you are working on a million things at once (but nobody does that, right?).

Case in point: Every year the Centers for Medicare & Medicaid Services releases data on how much money doctors received from medical companies. If I’m not mistaken, the centers release the data now because of the great investigative work of ProPublica. The data they release is important and something our readers care about.

Last year, I spent a ton of time parsing the data to put together a site that shows our readers the top paid doctors in Iowa. We decided to limit the number of doctors shown to 100, mostly because we ran out of time.

In all, I spent at least a week on the project last year. But this year, I was not afforded that luxury because the data dump caught us off guard. Instead of a week or more, we had a few days to put together the site and story.

Fortunately, I’ve spent quite a bit of time in the last year moving away from editing spreadsheets in Excel or Google Docs to editing them using shell scripts.

The problem with GUI editors is they aren’t easily reproducible. If you make 20 changes to a spreadsheet in Excel one year and then the next year you get an updated version of that same spreadsheet, you have to make those exact same 20 changes again.

That sucks.

Fortunately with tools like csvkit and agate, you can replicate A LOT of Excel-like functions in a shell script or a Python script. This allows you to document what you are doing so you don’t forget. I can’t tell you how many times I’ve done something in Excel, saved the spreadsheet, then come back to it a few days later and completely forgotten what the hell I was doing.

It also makes it so you can automate your data analysis by allowing you to re-run the functions over and over again. Let’s say you perform five functions in Excel and trim 20 columns from your spreadsheet. Now let’s say you want to do that again, either because you did it wrong the first time or because you received new data. Now would you rather run a shell script that takes seconds to perform this task or do everything over in Excel?
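
For flavor, here’s a minimal sketch of what such a script can look like using csvkit’s command-line tools. The file and column names are made up for illustration; the commands are real csvkit utilities:

#!/bin/bash

# Convert the raw Excel file from the agency into a CSV
in2csv raw_payments.xlsx > payments.csv

# Keep only the columns we need, filter down to Iowa rows
# and sort by payment amount, largest first
csvcut -c doctor_name,state,amount payments.csv | \
	csvgrep -c state -m "IA" | \
	csvsort -c amount -r > iowa_payments.csv

Run it once or run it 50 times; you get the same output either way. And when next year’s dump lands, you just point it at the new file.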

Another nice thing about shell scripts is you can hook your spreadsheet into any number of data processing programs. Want to port your data into a SQLite database so you can perform queries? No problem. You can also create separate files for those SQL queries and run them in your shell script as well.
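
For instance, csvkit’s csvsql command can load a CSV straight into SQLite, and the sqlite3 command-line tool can run a saved query file, all from the same shell script (the file names here are hypothetical):

# Load the cleaned CSV into a SQLite database
csvsql --db sqlite:///payments.db --insert iowa_payments.csv

# Run a saved SQL query against the database
sqlite3 payments.db < top_doctors.sql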

All of this came in handy when this year’s doctor payment data was released. Sure, I still spent a stressful day editing the new spreadsheets. But if I hadn’t been using shell scripts, I doubt I would have gotten done by our deadline.

Another thing I was able to do was increase the number of doctors listed online. We went from 100 total doctors to every doctor who received $500 or more (totaling more than 2,000 doctors). This means it’s much more likely that readers will find their doctors this year.

The project is static and doesn’t have a database backend so I didn’t necessarily need to limit the number of doctors last year. We just ran out of time. This year, we were able to not only update the app but give our readers much more information about Iowa doctors. And we did it in a day instead of a week.

The project is online here. The code I used to edit the 15+ gigabytes worth of data from CMS is available here.

Written by csessig

September 7, 2016 at 4:59 pm

New code: Using a Google spreadsheet to power a mobile site

with 3 comments

I’ve posted a new template on Github that I hope others will find useful. It uses Tabletop.js to transfer data from a Google spreadsheet onto a mobile website. I’ve tried to set it up so you have to do little tweaking of the code that grabs and formats the data, letting you focus on collecting and cleaning the data itself. The template will render the same regardless of what data is in the Google spreadsheet.
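
At its core, the Tabletop.js hookup looks something like this (a minimal sketch; the spreadsheet key is made up):

// Public key of the published Google spreadsheet (hypothetical)
var spreadsheetKey = '0AmYzu_s7QHsmdDNZUzRlYldnWTZCLXdrMXlYQzVxSFE';

Tabletop.init({
	key: spreadsheetKey,
	// "data" is an array of objects, one per spreadsheet row
	callback: function(data, tabletop) {
		console.log(data);
	},
	// Only pull rows from the first worksheet
	simpleSheet: true
});

The template wraps this up for you; you mostly just swap in your own spreadsheet key.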

Its initial purpose was to power our guides to weekend festivals, since we have quite a few in the summer. We used it, for instance, for our guide to the National Cattle Congress fair, which was last weekend. And it was designed for mobile users in particular. You will notice how the website kind of looks like an app you would download on your smartphone, with navigation buttons at the bottom. But it was responsively designed so it will work on all screen sizes.

You can use it for whatever you like. You can also change the code to have the data displayed as you see fit. The whole project is open-sourced.

The code can be found here. If you use it at all, please e-mail a link to your work so I can share it with the world! My e-mail is chris.essig@wcfcourier.com.

Written by csessig

September 18, 2013 at 10:47 am

How to: Using OpenRefine, Excel to clean messy, user-submitted data

with one comment

Note: This is cross-posted from Lee’s data journalism blog, which you can read over there by clicking here.

We at The Courier ran into an interesting problem last week that I’m sure other news organizations run into as well.

Every year, The Courier puts on a Best of the Best competition. We select a series of categories like food, dining, jewelry stores, etc. and ask our readers to pick the best business in town for each category.

The businesses with the most votes in each category are rewarded by being featured in the newspaper. They also get plaques they can hang at their business to show off their nifty award.

We have several forms that people use to submit their votes. They are allowed to type in their favorite businesses, instead of pick from a field of choices.

We store their votes in a database. We then spit out the data into a spreadsheet with all the votes. Each row is one vote. For week two, we had a spreadsheet with 3,000+ rows of entries.

To view that spreadsheet, click here. We have columns for entry id, category, entrant id and the actual entry. The “entry” field is what we will focus on throughout this walkthrough.

The problem: Everyone has their own spelling for businesses in the “entry” column. For instance, “Famous Footwear” was spelled “Famous Footwear, Cedar Falls” and “Famous Footware”; it was sometimes partially upper-cased and sometimes misspelled altogether.

How do we merge all those different spellings for businesses into one spelling and count them up?

We could do it by hand, of course, but that is painful, tedious and time-consuming. We could also pay a company to do it but that could be expensive. At least one company that we know of charges $4,000 for such services.

Fortunately there is a better option out there that is FREE and not horribly time-consuming.

Here’s how we cleaned this spreadsheet with user-submitted data:

1. First download a program called OpenRefine and open it up. This program was originally created by the fine people at Google and then later open sourced. Its purpose is to clean “dirty” data (like our user-submitted data). Their website has three videos on how to use the program that you should definitely check out before firing it up.

2. With OpenRefine open, create a new project and import in the spreadsheet we want to clean. Then find the column of data we want to clean. For us, all of our user-submitted businesses were in a column called “entry.”

3. In OpenRefine, there is a little arrow next to the column title (“entry”). Click that to see a list of options for cleaning the data. If you look under “Edit cells” and “Common transforms,” you’ll see a list of common edits like trimming the leading and trailing whitespace in the entries. So ” U.S. Cellular ” would be changed to “U.S. Cellular” (notice how the white space is removed).

4. The real magic, however, is in the cluster function. This uses algorithms to find different spellings for the same business and then allows you to pick ONE common spelling for ALL of them.

– This makes it much easier down the road to count how many times a particular business was voted on because we don’t have a wide range of spellings for that business. Instead, we can narrow it down to just a couple of spellings. Or if we are really lucky, we can convert all of the different spellings into one common spelling using the algorithms.

– To pull up the entries, click “Facet” then “Text facet”. That will pull up a list of unique entries in the “entry” column. If you click “count”, you can see how many times a business was entered. For instance, “Scheels” was voted on 115 times. Then click “Cluster”.

– Clustering is one of OpenRefine’s most powerful tools. When you open this section up, you’ll see a list of clusters with different spellings for what the algorithm thinks is the same business.

– If those are accurate clusters — or if the different spellings listed in the cluster really are different variations for the same business — you can select the check box under “Merge?” to convert all of those spellings in the cluster to the same, single spelling.

– OpenRefine comes packed with several clustering algorithms. You can change “Method,” “Keying Function” etc. to check them all out and see which ones work best for you. When you’re done with one algorithm, you can select “Merge Selected & Re-Cluster” to mess with other algorithms or “Merge Selected & Close” to close out of this section.

– I think it’s best to go through all of the algorithms and play around with them a bit to cluster as many businesses together as you can.

5. Once you are done, export the spreadsheet with the button in the top right corner of the screen and click “Excel.”

Note: We used OpenRefine to cluster together different spellings for the same business. Now we’ll use Excel to create a list of uniquely-titled businesses and how many times those were voted on.

– After we are done, we will still probably have different spellings for the same business. It’s likely that the OpenRefine algorithms will be able to find many of the wide range of spellings for a particular business but not all of them. When you allow users to type in their own business names, you open yourself up to A LOT of spellings for the same thing.

– But the hope is we can narrow this list of spellings down to as few as possible so the manual work of tallying up the votes is as painless as possible.

Now open your spreadsheet up in Excel:

– Right now, we still have a spreadsheet that is about 3,000 rows long. The advantage is we have limited the number of different spellings for all the businesses people have voted on. What we need to do now is create a list of uniquely titled businesses and how many times they were voted on.

6. A lot of people accidentally put in their phone number for the entry. Some entered “====”. Others didn’t fill in the spot at all. We need to get rid of all those.

– I first moved the “entry” column from column E to column A. I then sorted the data from A to Z so the entries with phone numbers are at the top. I deleted those. I then sorted it from Z to A to put the empty entries at the top. And then I deleted those too.

– We can also delete the “entry_id” and “entrant_id” columns because for our purposes, they are useless.

– For those following along, here’s what my spreadsheet looks like now.

– One tricky aspect of this is people can vote for the same business in different categories. So we need to make sure the business and the category of each vote stay together. That way, a vote for “Ace Hardware” in the “Hardware Store” category doesn’t get counted with a vote for “Ace Hardware” in the “Buy a Lawnmower” category.

7. I did this by merging my entry column with my category column into a new column. This can be done with the concatenate formula:

=CONCATENATE(A2, " | ", B2)

– A2 is our entry column and B2 is our category column. The “ | ” isn’t necessary but is useful if we want to break apart this column again in the future. We then fill down that function to the last row in the spreadsheet using the fill handle.

– Here’s what my spreadsheet looks like now.

Tip: When using formulas on big spreadsheets, Excel tends to slow down A LOT. So when you are done using a formula, you can copy it and paste just the values (not the formulas) in its place to help Excel run smoother. I do this throughout the walkthrough to make our spreadsheets much easier to work with.

8. The next step is to get a list of uniquely-titled businesses and categories. The best walkthrough I found to accomplish this was this one from Microsoft. Complete steps 1-6 under “Count the number of unique values by using a filter” to create a new column of uniquely-titled businesses.

– I also deleted the entry and cat_id columns. We will not need those any more.

– Here’s what my spreadsheet looks like now.

– We are down to under 1,000 rows of uniquely-titled businesses and categories. And honestly, if I had messed with the algorithms in OpenRefine more, I could have narrowed that list even further.

9. Now we have a column for each vote (column A) and a column with each unique business and category (column B). We need to count how many times each unique business and category (column B) was voted on — or how many times each unique business and category appears in column A.

– We’ll do this by counting the number of times column B occurred in column A using this SUM formula:

=SUM(IF($A:$A=B2,1,0))

– This basically goes through each row in column A and if it is equal to what is in column B, it adds one. Then it adds all that together. The result is how many times our unique business and category was voted on.

Note: The dollar signs make the reference absolute, so the formula checks every row in column A even as it’s copied down the spreadsheet.
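
Note: Depending on your version of Excel, the SUM formula above may need to be entered as an array formula by pressing Ctrl+Shift+Enter. Excel’s built-in COUNTIF function should get you the same count in a single step:

=COUNTIF($A:$A,B2)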

– Here’s what my spreadsheet looks like now.

10. We’re almost done. All we need to do now is split up our business category column into a column for businesses and a column for categories.

– Do this by first copying and pasting the value of the “unique_entry_cat” column into a new column. We have to do this first, otherwise our sum formula in our “count” column won’t work properly.

– We then break apart that new column using this method and use “|” as the delimiter to split them up. Then delete the old columns and copy and paste the values over the formulas in the spreadsheet.

– Here’s what my spreadsheet looks like now.

11. Finally, copy and paste the values of the columns onto a new spreadsheet and custom sort the spreadsheet first from A to Z for the category column and then largest to smallest in the count column.

– Here’s what the final spreadsheet looks like.

– That’s it! I realize that this is a VERY long-winded blog post and may be confusing to follow at times. If you have any questions, don’t hesitate to e-mail me at chris.essig@wcfcourier.com

Written by csessig

August 16, 2013 at 3:06 pm

How-to: Creating a public salary database

leave a comment »

Note: This is cross-posted from Lee’s data journalism blog, which you can read over there by clicking here.

People love knowing what public employees earn in a year. They always make for interesting conversation and plenty of controversy.

In honor of Sunshine Week, I’m going to go through the steps of creating a quick and dirty salary database using DataTables.

Here’s the process:

1. Find a data set and clean, clean, clean

For this example, I’m going to use the 2012 Salary Book for public employees in Iowa. The spreadsheet has records for more than 59,000 public employees. And there are plenty of things in the spreadsheet you’ll probably want to clean up.

I’m not going to dive too deeply into cleaning data here except to point out that it is VERY IMPORTANT. I like to first bring my spreadsheets into Excel and simply sort each column from A to Z to find empty cells, cells with funky text or characters, etc. Here’s a handy guide for cleaning data in Excel.

I then use a great program called Open Refine (formerly Google Refine) to help clean the data. Their website includes some video tutorials on how to use the program. Be sure to also read this post on how investigative journalists should use Refine. Finally, this walkthrough also teaches journalists to use Refine.

The process of cleaning data is probably the most important step in creating a database. It may take several hours. But just as you wouldn’t print a story without first proofreading it, you shouldn’t make data available on your site that you haven’t cleaned to the best of your abilities.

2. Hosting your data

When you’re done cleaning the data, you’ll export the spreadsheet in Google Refine. Then you’ll need to upload the data and host it somewhere. You have a few options:

a. Don’t host it at all. Instead follow this method I mentioned in an earlier blog post that converts the spreadsheet into an HTML table, which is then converted into a DataTable. The disadvantage of this method is if you want to update the spreadsheet, you’ll have to do the conversion over and paste the new HTML table into your document. For spreadsheets that are changed repeatedly, this could be a pain.

b. Upload the spreadsheet into Google Drive and use an awesome library called Tabletop.JS to feed the spreadsheet to DataTables. This is quickly becoming one of my preferred ways to host data simply because it allows several journalists and editors to edit the same Google spreadsheet. All their changes will be reflected on your page, which will be updated automatically as the spreadsheet is updated.

And in fact, my hope was to upload our spreadsheet of Iowa public employee salaries to Google and use Tabletop.JS for this project. But I couldn’t because Google only allows spreadsheets with fewer than 400,000 cells. This spreadsheet, at 59,000 rows by seven columns (more than 410,000 cells), just eclipses Google’s limit.

For an example of Tabletop.JS in action, check out this repo on my Github page. If you have any questions on how it works, shoot me an e-mail or leave a comment on this article.

c. You can also use an open source tool called DataSet from the Miso Project. Instead of hosting it with Google, you just load the CSV file on your own server and pull the data directly from the file. You can convert the CSV information into Javascript arrays, objects, etc., which allows the data to work with DataTables and tons of other awesome Javascript libraries.

The advantage to DataSet is you can work with huge spreadsheets like this one on Iowa public employee salaries and you don’t have to worry about Google’s size limits. The downside is only one person can edit the spreadsheet at a time. And they’ll have to download the CSV, re-save it, then re-upload it to the server every time they make a change.

3. The code

The first thing you’ll need to do is go to my Github page and download the directory for this project.

I’m going to break down the Javascript file into chunks to give you an idea of what is happening:

// DataTables currency
// Use to sort the table via currency
jQuery.extend( jQuery.fn.dataTableExt.oSort, {
	"currency-pre": function ( a ) {
		a = (a==="-") ? 0 : a.replace( /[^\d\-\.]/g, "" );
		return parseFloat( a );
	},
	
	"currency-asc": function ( a, b ) {
		return a - b;
	},
	
	"currency-desc": function ( a, b ) {
		return b - a;
	}
});

This function allows us to sort the table based on currency. Basically, when the reader first loads the database, the employee with the highest salary will be shown. For more information on the different sorting techniques offered by DataTables, click here.

// Where we'll put the data
var newDataSet = [];
 
// Our column headers
// Change these to fit your table
var tableColumnSet = [
	{ "sTitle": "Employee" },
	{ "sTitle": "Department" },
	{ "sTitle": "Total FY12 Salary", "sType": "currency" },
	{ "sTitle": "Gender", "sClass": "center" },
	{ "sTitle": "County", "sClass": "center" },
	{ "sTitle": "Position", "sClass": "center" }
];

This creates a blank array called “newDataSet.” Later, we will push all of the content in our CSV into this array, which will be used by DataTables and displayed on the page. The “tableColumnSet” variable is where we set our column header names. For future projects, you’ll want to change those to match your own CSV file.

Notice also how we set the “Total FY12 Salary” column to currency, which allows us to sort the table by currency.

// Our DataTable information
// Don't need to change anything here
// Unless you want to customize the table format
function showInfo() {
  $('#table_div').html( '<table cellpadding="0" cellspacing="0" border="0" class="table table-striped table-bordered" id="table"></table>' );
 
	// Push the data to the table
	$('#table').dataTable( {
		"bProcessing":true,
		"sPaginationType": "bootstrap",
		"iDisplayLength": 100,
		"aaData": newDataSet,
		"aoColumns": tableColumnSet,
		// Pick which column we will sort by default
		// For this table, we'll sort descending by Total FY12 Salary
		"aaSorting": [[ 2, "desc" ]],
		"oLanguage": {
			"sLengthMenu": "_MENU_ records per page"
		}
	});
}

DataTables provides us with several different options for displaying the data, including how to sort the data, where to pull the data from, how many rows to show at a time, etc. We’re putting all this in a function called “showInfo”, which we will call below.

For more on the options available to us with DataTables, click here.

You may also notice that this table is styled using Twitter’s Bootstrap framework. Basically that gives the table its zebra look, as well as styling the buttons and search field. For more information on working with Bootstrap, click here.

// Load up the CSV using Miso.Dataset
$(document).ready( function() {
  // Change URL to the right path for your CSV
	var ds = new Miso.Dataset({
  		url : 'csvs/salary_book_clean.csv',
  		delimiter : ','
  	});
 
	// Run this after we load our CSV
	ds.fetch({ success : function() {
    	this.each(function(row, rowIndex) {
    		// Change these variables to match your column names
    		var employeeData = row.Employee;
    		var departmentData = row.Department;
    		var salaryData = row.TotalFY12Salary;
    		var genderData = row.Gender;
    		var countyData = row.County;
    		var positionData = row.Position;
    		// Put information in an array and push it to our table
    		// Change these variables to match variables above
    		var myArray = [employeeData, departmentData, salaryData, genderData, countyData, positionData];
			newDataSet.push(myArray);
  		});
    	// Call DataTable function showInfo
		showInfo();
	}});
});

The last part of this file will load when our document is ready. It uses Miso’s DataSet library to pull in the information from our CSV file. Once the information is loaded, we loop through each row in our data. A Javascript array is created for each row containing that row’s data and then that is pushed to our newDataSet variable, which we created earlier.

For future projects, make sure you change the variable names in each loop to fit your CSV file. So change the words “Employee”, “Department”, etc. to match the column headers in your CSV file.

Finally, once that is done, we call our “showInfo” function, which puts the content in our CSV on the page using DataTables.

If you have any questions, feel free to leave me a comment or e-mail me directly at chris.essig@wcfcourier.com.

Written by csessig

March 15, 2013 at 10:03 am

Turning Excel spreadsheets into searchable databases in five minutes

with 21 comments

Note: This is cross-posted from Lee’s data journalism blog and includes references to the Blox content management system, which is what Lee newspapers use. Reporters at Lee newspapers can read my blog over there by clicking here.

Here’s a scenario I run into all the time: A government agency sends along a spreadsheet of data to go along with a story one of our reporters is working on. And you want to post the spreadsheet(s) online with the story in a reader-friendly way.

One way is to create a sortable database with the information. There are a couple of awesome options on the table put out by other news organizations. One is TableSetter, published by ProPublica, which we’ve used in the past. The other one is TableStacker, which is a spin-off of TableSetter. We have also used this before.

The only problem is making these tables typically takes a little bit of legwork. And of course, you’re on deadline.

Here’s one quick and dirty option:

1. First open the spreadsheet in Excel or Google Docs and highlight the fields you want to make into the sortable database.

2. Then go to this wonderful website called Mr. Data Converter and paste the spreadsheet information into the top box. Then select the output as “HTML.”

3. We will use a jQuery plugin called DataTables to create the sortable databases. It’s a great and easy tool for the job.

4. Now create an HTML asset in Blox and paste in this DataTables template below:

Note: I’ve added some CSS styling to make the tables look better.

<html>
<head>
<link rel="stylesheet" type="text/css" href="http://wcfcourier.com/app/special/data_tables/media/css/demo_page.css">
<link rel="stylesheet" type="text/css" href="http://wcfcourier.com/app/special/data_tables/media/css/demo_table.css">

<style>
table {
	font-size: 12px;
	font-family: Arial, Helvetica, sans-serif;
	float: left;
}
table th, table td {
    text-align: center;
}

th, td {
	padding-top: 10px;
	padding-bottom: 10px;
	font-size: 14px;
}

label {
	width: 100%;
	text-align: left;
}

table th {
	font-weight: bold;
}

table thead th {
    vertical-align: middle;
}

label, input, button, select, textarea {
    line-height: 30px;
}
input, textarea, select, .uneditable-input {
    height: 25px;
    line-height: 25px;
}

select {
    width: 100px;
}

.dataTables_length {
    padding-left: 10px;
}
.dataTables_filter {
	padding-right: 10px;
}

</style>

<script type="text/javascript" language="javascript" src="http://wcfcourier.com/app/special/data_tables/media/js/jquery.js"></script>
<script type="text/javascript" language="javascript" src="http://wcfcourier.com/app/special/data_tables/media/js/jquery.dataTables.min.js"></script>

<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
$('#five_year').dataTable({
"iDisplayLength": 25
});
});
</script>
</head>

<body>

<!-- Enter HTML table here -->

</body>

</html>

– This will link the page to the necessary CSS stylesheets and Javascript files to get the DataTable working. The other option is to go to the DataTable’s website and download the files yourself and post them on your own server, then link to those files instead of the ones hosted by WCFCourier.com.

5. Where you see the text “<!-- Enter HTML table here -->”, paste in your HTML table from Mr. Data Converter.

6. The last thing you will need to do is create an “id” for the table and link that “id” to the DataTable’s plugin. In the example above, the “id” is “five_year.” It is noted in this line of code in the DataTable template:

<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
$('#five_year').dataTable({
"iDisplayLength": 25
});
});
</script>

– The header on your HTML table that you paste into the template will look like so:

<table id="five_year" style="width: 620px;">
  <thead>
    <tr>
      <th class="NAME-cell">NAME</th>
      <th class="2008 Enrollment-cell">2008 Enrollment</th>
      <th class="2012 Enrollment-cell">2012 Enrollment</th>
      <th class="Increase/Decrease-cell">Increase/Decrease</th>
      <th class="Percent Increase/Decrease-cell">Percent Increase/Decrease</th>
    </tr>
  </thead>

– Here’s a live example of two sortable tables. The first table has an “id” of “five_year.” The second has an “id” of “one_year.” The full code for the two tables is available here.

– As an alternative, you can use a jQuery plugin called TableSorter (not to be confused with the TableSetter project mentioned above). The process of creating the table is very similar.
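
– For reference, here’s a sketch of the TableSorter version, assuming you’ve downloaded the plugin’s jquery.tablesorter.js file and linked to it alongside jQuery:

<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
// Point the plugin at the table's "id," just like DataTables
$('#five_year').tablesorter();
});
</script>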

7. That’s it! Of course, DataTables provides several customization options that are worth looking into if you want to make your table look fancier.

Written by csessig

January 3, 2013 at 4:34 pm

Final infographics project

leave a comment »

For the last six weeks, I’ve taken a very awesome online course on data visualization called “Introduction to Infographics and Data Visualization.” It is sponsored by the Knight Center for Journalism in the Americas and taught by the extremely talented Alberto Cairo. If you have a quick second, check out his website because it’s phenomenal.

Anyways, we have reached the final week of the course and Cairo had us all make a final interactive project. He gave us free rein to do whatever we wanted. We just had to pick a topic we’re passionate about. Given that I was a cops and courts reporter for a year in Galesburg, Illinois before moving to Waterloo, Iowa, I am passionate about crime reporting. So I decided for my final project I’d examine states with high violent crime rates and see what other characteristics they have. Do they have higher unemployment rates? Or lower education rates? What about wage rates?

Obviously, this is the type of project that could be expanded upon. I limited my final project to just four topics mostly because of time constraints. I work a full time job, you know! Anyways, here’s my final project. Let me know if you have any suggestions for improvement.

About:

Data: Information for the graphic was collected from four different sources, which are all listed when you click on the graphic. I took the spreadsheets from the listed websites, pulled out what I wanted and made CSVs for each of the four categories broken down in the interactive.

Map: The shapefile for the United States was taken from the U.S. Census Bureau’s website. Find it by going to this link, and selecting “States (and equivalent)” from the dropdown menu. I then simplified the shapefile by about 90 percent using this website. Simplifying basically makes the lines on the states’ polygons less precise but dramatically reduces the size of the file. This is important because people aren’t going to want to wait all day for your maps to load.

Merging data with map: I then opened that shapefile with an awesome, open-source mapping program called QGIS. I loaded up the four spreadsheets of data in QGIS as well using the green “Add vector layer” button (This is important! Don’t use the blue “Create a Layer from a Delimited Text File” button). The shapefile and all the spreadsheets will now show up on the right side of the screen in QGIS under “Layers.”

Each spreadsheet had a column for the state name, which matched the state names for each row in the shapefile. It’s important these state names match exactly. For instance, for crime data from the FBI I had to change the casing of the state names, turning “IOWA” into “Iowa” before loading it into QGIS, because that’s how the names appear in the shapefile.

Then you can open the shapefile’s properties in QGIS and merge the data from the spreadsheets with the data in the shapefile using the “Joins” tab. Finally, right-click on the shapefile’s layer, click “Save As” and export it as a GeoJSON file. We’ll use this with the wonderful mapping library Leaflet.

Leaflet: I used Leaflet to make the map. It’s awesome. Check it out. I won’t get into how I made the map interactive with Javascript because it’s copied very heavily off this tutorial put out by Leaflet. Also check it out. The only thing I did differently was basically make separate functions (mentioned in the tutorial) for each of my four maps. There is probably (definitely) a better way to do this but I kind of ran out of time and went with what I had. If you’re looking for the full code, go here and click “View Page Source.”
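
The heart of the Leaflet code is only a few lines. Here’s a minimal sketch; “statesData” is the variable holding the GeoJSON (the name used in Leaflet’s tutorial) and “top_ten” is a hypothetical property I’m using for illustration:

// Create the map and center it on the continental U.S.
var map = L.map('map').setView([37.8, -96], 4);

// Shade the top 10 violent crime states black, the rest gray
function style(feature) {
	return {
		fillColor: feature.properties.top_ten ? '#000' : '#ccc',
		weight: 1,
		color: '#fff',
		fillOpacity: 0.8
	};
}

// Draw the GeoJSON layer with that style
L.geoJson(statesData, { style: style }).addTo(map);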

Design: The buttons used are from Twitter’s Bootstrap. I used jQuery’s show/hide functions to show and hide all the elements on the page. This included DIVs for the legend, map and header.
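
The show/hide part is as simple as it sounds. A sketch, with hypothetical DIV and button ids:

// When the reader clicks a map button, swap the visible DIVs
$('#map-button').click(function() {
	$('#legend').show();
	$('#intro').hide();
});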

GeoJSON: The last thing I did was modify my GeoJSON file. You’ll notice how the top 10 states for violent crime rates are highlighted in black on the maps to more easily compare their characteristics across the maps. Well, I went into the GeoJSON and put those 10 states’ attributes at the bottom of the file. That way they are loaded last on the map and thus appear on top of the other states. If you don’t do this, the black outlines for the states don’t show up very well and look like crap. Here’s the GeoJSON file for reference.

Hopefully that will help guide others. If you have any questions, feel free to drop me a line. Thanks!

Written by csessig

December 11, 2012 at 12:13 am

Going mobile with our news apps

leave a comment »

It’s been about two months since we launched our crime map for the city of Waterloo and so far, the response has been overwhelmingly positive.

I’ve been proud of our fans on Facebook, in particular, who are using the map to form informed opinions on crime in the area. One of our fans actually counted every dot on the map to counter another fan’s claim that crime only happens on the east side of town. He found crime was almost equally spread across the town.

In hopes that this data will be digested by an even larger audience, I launched the mobile/tablet equivalent of the map last week. The new app is fully responsive thanks to Twitter’s Bootstrap framework. If you haven’t checked out Bootstrap, you really should. It makes designing websites for all platforms — desktops, tablets, mobile phones — about as easy as it’s going to get.

I also went in and modified a lot of the CSS, changing things like the width of objects on the page from pixels to percentages. This ensures that the app looks great no matter the width of the screen you’re viewing it on.

Serving up the tablet version of the map wasn’t particularly difficult. It’s basically the same map on our site without the header and footer, which seems to load slower on iPads. It’s also designed to be flexible regardless of how wide your tablet screen is.

The mobile version was more difficult. At this time, the mobile app does not have a map of the crimes. Instead, it’s just the color charts comparing different crimes and a table of all the crime data. I stripped out the map mostly because it’s difficult to click individual points on the map on small mobile screens. But screens continue to get bigger and nicer so hopefully this won’t be a problem in the future.

One pro tip: I set the padding-right CSS property on the table of crimes to 10 percent. This gives a nice cushion to the right of the table, making it easier for people to scroll past it on smartphones with touch screens.
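
In CSS, that tip is a single rule (the table’s class name here is hypothetical):

.crime-table {
	/* 10 percent cushion so thumbs can scroll past the table */
	padding-right: 10%;
}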

For this project, I went about making the mobile version the completely wrong way: I opted to create just the desktop version at first and then go back and make one for tablets and phones.

Ideally I would have done both at the same time, which is easy to do with Bootstrap. And that’s exactly what I did for another project we launched this weekend on campaign finance reports. The project examines the finance reports for candidates running in four local races. The reports are made available by the Iowa Ethics and Campaign Disclosure Board, which fortunately allows users to export the data into CSV files.

We broke down the data and created bar charts using Bootstrap to compare the figures. The framework has several bar components that make it easy to create the charts, and they are worth checking out. The best part is they are designed to look good on all platforms.
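
As a rough sketch, each bar can be as simple as a Bootstrap progress bar with its width set to the data (this uses the Bootstrap 2-era “progress” and “bar” classes; the figures are made up):

<div class="progress">
<div class="bar" style="width: 60%;">$12,000</div>
</div>
<div class="progress">
<div class="bar" style="width: 30%;">$6,000</div>
</div>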

We also have databases of all the contributions that are searchable. This allows readers to see exactly where the money is coming from.

For this project, I created the mobile and tablet equivalents of the app as I was creating the desktop version. When viewing it on a desktop computer, the app is embedded on our website so it has the election header, footer and colors. The same app was created with responsive design in mind, so it will look just as good on a tablet or smartphone as it does on a desktop computer.

Many studies show that more and more people are getting their news on smartphones. It is imperative that we keep those readers in mind when designing full-scale apps for our websites.

Hopefully we can continue this trend at the Courier and make sure our projects are reaching our full audience.

Written by csessig

October 21, 2012 at 1:58 pm