Chris Essig

Walkthroughs, tips and tricks from a data journalist in eastern Iowa

Archive for the ‘Multimedia journalism’ Category

New adventures


Some exciting personal news: I’ll be joining the Cedar Rapids Gazette and KCRG as their Interactive News Developer at the end of the month. My last day at the Waterloo-Cedar Falls Courier is Friday.

The new job will let me develop full time, meaning I will be coding more than I have. And I will continue to work in a newsroom, which is awesome. Needless to say, I’m very excited about the move.

I spent the last three years at The Courier, and as this blog shows, I learned a ton in that time span. I will greatly miss working there. And more importantly, I'm very, very proud of all the awesome work we did while I was there.

Here’s to new adventures!

Written by csessig

May 15, 2014 at 2:37 pm

Creating responsive maps with Leaflet, Google Docs


Note: This is cross-posted from Lee’s data journalism blog, which you can read over there by clicking here.

Quick: Name five people you know who don’t have a smartphone.

Stumped? Yeah, me too. The fact is more and more people have smartphones and are using them to keep up with the world.

What does that mean for news app developers? We need to be especially conscious of the mobile platform and make sure everything we build for the web is compatible with the smallest of screens.

One great way to do this is to build apps with responsive design. The idea behind responsive design is to create one web page for all users, as opposed to making separate pages for desktop and mobile devices.

Then we simply add, rearrange, subtract or tweak features on a web page based on the size of the browser the user has when they are viewing the app.

Maps can be difficult to manage on mobile platforms, especially when you add in legends, info boxes, etc. But they are not impossible. Fortunately Leaflet, an alternative to Google Maps, is designed to work especially well on mobile platforms.

In this example, we will be loading data into a Google spreadsheet and using Leaflet to map the data on a responsive map.

1. Learn a little bit of Leaflet

Before we start, it would probably be best if you familiarize yourself with Leaflet. Fortunately, their website has some wonderful walk-throughs. I'd recommend going through this one before going any further.

2. Grab the code

Now, go to my Github page and download the template.

The Readme file includes instructions on how to set up a basic map using Google spreadsheets and Tabletop.js, which is a wonderful tool that allows us to do all kinds of things with data in a Google spreadsheet, including map it using Leaflet.
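To give you an idea of the handoff, here is a minimal sketch of Tabletop.js feeding spreadsheet rows to Leaflet. The spreadsheet key and the "latitude," "longitude" and "title" column names are placeholders for illustration; the template's actual code is more involved.

// A minimal sketch, not the template's exact code.
// The spreadsheet key and column names below are assumptions -- match them
// to your own published Google spreadsheet.
var map = L.map('map').setView([42.49, -92.34], 12);

L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png', {
    attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

Tabletop.init({
    key: 'YOUR_SPREADSHEET_KEY',   // placeholder for your spreadsheet's key
    simpleSheet: true,             // hand the callback a plain array of rows
    callback: function(rows) {
        $.each(rows, function(i, row) {
            L.marker([+row.latitude, +row.longitude])
                .bindPopup(row.title)
                .addTo(map);
        });
    }
});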

3. Edit your index.html page

After you have followed the instructions on my Github page, you should have a map ready to go.

All you have to do is go into the index.html page and edit the title of the map, as well as add your own information into the “sidebar_content” div. Also make sure you add your name to the credits because you deserve credit for this awesome map you are putting together.

4. How does it work?

Now open up your map in a browser. If you resize the browser window, you'll notice that the map resizes itself. The other components on the page also readjust automatically.

Some of this is done with the Bootstrap web framework, which was designed with responsive design in mind.

I’ve also added my own CSS. One easy thing I’ve done with elements on the page is declare their widths and heights using percentages instead of pixels. This ensures that the components will automatically be adjusted regardless of the screen size.

Take a look at our css/styles.css file to get an idea of what I’m talking about: 

/* Body */
body {
	padding-left: 0px;
	padding-right: 0px;
	margin: 0;
	height: 100%;
}

html {
	height: 100%;
}


/* Map */
#map {
	position: absolute;
	float: left;
	top: 1%;
	height: 98%;
	width: 100%;
	z-index: 1;
}

The map’s height is 98 percent and its width is 100 percent, ensuring it resizes whenever the browser does. If we had set the width to 600 pixels, the map would stay 600 pixels wide even when the browser was adjusted.

– You’ll notice some other changes. For instance, if you have a wide screen, the map’s sidebar will be on the right side of the screen. We did this by using absolute positioning to place the sidebar and its content on the page:

/* Sidebar */
#sidebar {
	position: absolute;
	top: 2%;
	right: 1%;
	height: 96%;
	width: 30%;
	z-index: 2;
	border: 1px solid #999;
	padding-left: 1%;
	padding-right: 1%;
	background-color: #FFFFFF;
	background-color: rgba(255,255,255,0.9);
}

#sidebar h3 {
	line-height: 30px;
}

#sidebar_content {
	float: left;
	width: 30%;
	height: 70%;
	position: fixed;
	overflow: auto;
	padding-top: 5px;
}

The sidebar’s “right” position is set to 1 percent. This ensures that the sidebar will appear only 1 percent from the right side of the page. Additionally, its “top” position is set to 2 percent. This, effectively, pushes it to the top right corner of the screen.

We also used percentages to declare widths, heights and padding lengths for the sidebar.

– You’ll also notice when your browser is reduced drastically, the content of the sidebar disappears off the page. Instead, we have just the title of the sidebar at the top of the page. This is done with CSS media queries:

/* Styles from mobile devices */
@media (max-width: 625px) {

	#sidebar_content {
		display: none;
	}

	/* Sidebar */
	#sidebar {
		position: relative;
		margin-top: 0%;
		float: left;
		left: 0%;
		right: 0%;
		top: 0%;
		padding-left: 2%;
		padding-right: 2%;
		height: 35px;
		width: 96.5%;
	}

}

Basically, the above code says: if the browser is 625 pixels wide or smaller, apply the following CSS styles. That covers almost all mobile phones. In effect, you are telling the browser: if this is a mobile device, style the elements on the page this way.

The first thing we do is hide the “sidebar_content” div, which is within our main “sidebar” div. Besides the “sidebar_content” div, we also have a div within the “sidebar” div called “sidebar_header” for our title. The template sets the title to “Tabletop to Leaflet” initially, although you should change that to match your project.

We hide the “sidebar_content” div with the property “display: none.” Hiding it ensures that the only thing left in our “sidebar” div is the title. The sidebar itself is then pushed to the top left corner of the page by the styles in the media query above.

So what do we do with that information we have hidden? We put it in another div using some Javascript. Then we toggle that div from hidden to visible using a button with the class “toggle_description.” This toggle feature is enabled using jQuery.

From our js/script.js file:

// Toggle for 'About this map' and X buttons
// Only visible on mobile
isVisibleDescription = false;
// Grab header, then content of sidebar
sidebarHeader = $('#sidebar_header').html();
sidebarContent = $('#sidebar_content').html();
// Then grab credit information
creditsContent = $('#credits_content').html();
$('.toggle_description').click(function() {
  if (isVisibleDescription === false) {
		$('#description_box_cover').show();
		// Add Sidebar header into our description box
		// And 'Scroll to read more...' text on wide mobile screen
		$('#description_box_header').html(sidebarHeader + '<div id="scroll_more"><strong>Scroll to read more...</strong></div>');
		// Add the rest of our sidebar content, credit information
		$('#description_box_content').html(sidebarContent + '<br />' + 'Credits:' + creditsContent);
		$('#description_box').show();
		isVisibleDescription = true;
	} else {
		$('#description_box').hide();
		$('#description_box_cover').hide();
		isVisibleDescription = false;
	}
});

The above code first grabs the information from the “sidebar_content” div, then places it in the “description_box” div. It also sets our toggle function, which is activated when the user clicks on the button with the class “toggle_description.”

– The “description_box” div is also styled similarly to the “sidebar” div. The big difference is the “description_box” div is hidden by default because we only want it shown if we are on a mobile phone. The button with the class “toggle_description” is also hidden by default.

From our css/styles.css file:

/* 'About this map' button, description box */
/* Mobile only */
.toggle_description {
	display: none;
	z-index: 8;
	position: relative;
	float: right;
	right: 0%;
	top: 0%;
}

#description_box_cover {
	display: none;
	z-index: 10;
	position: absolute;
	top: 0%;
	width: 100%;
	height: 100%;
	background-color: #444444;
	background-color: rgba(44,44,44,0.9);
}

#description_box {
	position: absolute;
	display: none;
	z-index: 11;
	width: 92%;
	height: 93%;
	padding-top: 1%;
	padding-left: 1%;
	padding-right: 1%;
	left: 2.5%;
	top: 2.5%;
	border: 1px solid #999;
	background-color: #FFFFFF;
	background-color: rgba(255,255,255,0.9);
}

#description_box h3 {
	padding-bottom: 0px;
	line-height: 15px;
}

Now we do the opposite with the “toggle_description” button compared to what we did with the “sidebar_content” div.

With the “sidebar_content” div, we had it shown by default then hidden on mobile phones using CSS media queries. With the button, we hide it by default and then show it on mobile phones using a CSS property of “display: inline.”

From our css/styles.css file:

/* Styles from mobile devices */
@media (max-width: 625px) {

	.toggle_description {
		display: inline;
	}

}

As I noted above, when someone clicks that button, jQuery toggles the description box between hidden and shown. The box is hidden by default, so it is shown when the user first clicks the button. Then when the user clicks the blue X button (which also has the class “toggle_description”), the box disappears.

A similar philosophy is in place to hide and show the credits box.
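Here is a rough sketch of that credits toggle; the “toggle_credits” class and “credits_box” id are made-up names for illustration, so check the template’s js/script.js for the real thing.

// Hypothetical sketch of the credits toggle -- same pattern as the
// description box above. The class and id names are assumptions.
isVisibleCredits = false;
$('.toggle_credits').click(function() {
    if (isVisibleCredits === false) {
        $('#credits_box').show();
        isVisibleCredits = true;
    } else {
        $('#credits_box').hide();
        isVisibleCredits = false;
    }
});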

That should give you a good idea of what is happening with this map. Feel free to fork the repo and create your own awesome maps.

Have any questions? Don’t hesitate to leave a comment.

Turning Excel spreadsheets into searchable databases in five minutes


Note: This is cross-posted from Lee’s data journalism blog and includes references to the Blox content management system, which is what Lee newspapers use. Reporters at Lee newspapers can read my blog over there by clicking here.

Here’s a scenario I run into all the time: A government agency sends along a spreadsheet of data to go along with a story one of our reporters is working on. And you want to post the spreadsheet(s) online with the story in a reader-friendly way.

One way is to create a sortable database with the information. There are a couple of awesome options on the table put out by other news organizations. One is TableSetter, published by ProPublica, which we’ve used in the past. The other is TableStacker, a spin-off of TableSetter, which we have also used before.

The only problem is that making these tables typically takes a bit of legwork. And of course, you’re on deadline.

Here’s one quick and dirty option:

1. First open the spreadsheet in Excel or Google Docs and highlight the fields you want to make into the sortable database.

2. Then go to this wonderful website called Mr. Data Converter and paste the spreadsheet information into the top box. Then select the output as “HTML.”

3. We will use a service called DataTables, a great and easy jQuery plugin, to create the sortable tables.

4. Now create an HTML asset in Blox and paste in this DataTables template below:

Note: I’ve added some CSS styling to make the tables look better.

<html>
<head>
<link rel="stylesheet" type="text/css" href="http://wcfcourier.com/app/special/data_tables/media/css/demo_page.css">
<link rel="stylesheet" type="text/css" href="http://wcfcourier.com/app/special/data_tables/media/css/demo_table.css">

<style>
table {
	font-size: 12px;
	font-family: Arial, Helvetica, sans-serif;
	float: left;
}
table th, table td {
    text-align: center;
}

th, td {
	padding-top: 10px;
	padding-bottom: 10px;
	font-size: 14px;
}

label {
	width: 100%;
	text-align: left;
}

table th {
	font-weight: bold;
}

table thead th {
    vertical-align: middle;
}

label, input, button, select, textarea {
    line-height: 30px;
}
input, textarea, select, .uneditable-input {
    height: 25px;
    line-height: 25px;
}

select {
    width: 100px;
}

.dataTables_length {
    padding-left: 10px;
}
.dataTables_filter {
	padding-right: 10px;
}

</style>

<script type="text/javascript" language="javascript" src="http://wcfcourier.com/app/special/data_tables/media/js/jquery.js"></script>
<script type="text/javascript" language="javascript" src="http://wcfcourier.com/app/special/data_tables/media/js/jquery.dataTables.min.js"></script>

<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
	$('#five_year').dataTable({
		"iDisplayLength": 25
	});
});
</script>
</head>

<body>

<!-- Enter HTML table here -->

</body>

</html>

– This will link the page to the necessary CSS stylesheets and Javascript files to get the DataTable working. The other option is to go to the DataTables website, download the files yourself and post them on your own server, then link to those files instead of the ones hosted by WCFCourier.com.

5. Where you see the text “<!-- Enter HTML table here -->,” paste in your HTML table from Mr. Data Converter.

6. The last thing you will need to do is create an “id” for the table and link that “id” to the DataTables plugin. In the example above, the “id” is “five_year.” It is noted in this line of code in the DataTable template:

<script type="text/javascript" charset="utf-8">
$(document).ready(function() {
	$('#five_year').dataTable({
		"iDisplayLength": 25
	});
});
</script>

– The header of the HTML table that you paste into the template will look like so:

<table id="five_year" style="width: 620px;">
  <thead>
    <tr>
      <th class="NAME-cell">NAME</th>
      <th class="2008 Enrollment-cell">2008 Enrollment</th>
      <th class="2012 Enrollment-cell">2012 Enrollment</th>
      <th class="Increase/Decrease-cell">Increase/Decrease</th>
      <th class="Percent Increase/Decrease-cell">Percent Increase/Decrease</th>
    </tr>
  </thead>

– Here’s a live example of two sortable tables. The first table has an “id” of “five_year.” The second has an “id” of “one_year.” The full code for the two tables is available here.

– As an alternative, you can use a jQuery plugin called TableSorter (not to be confused with the TableSetter project mentioned above). The process of creating the table is very similar.
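A minimal setup with that plugin might look like the sketch below, assuming the same “five_year” table with a proper <thead>; this is an illustration, not code pulled from the example tables.

<script type="text/javascript" charset="utf-8">
// Hypothetical TableSorter setup for the same table markup.
$(document).ready(function() {
	$('#five_year').tablesorter();
});
</script>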

7. That’s it! Of course, DataTables provides several customization options that are worth looking into if you want to make your table look fancier.

Written by csessig

January 3, 2013 at 4:34 pm

Final infographics project


For the last six weeks, I’ve taken a very awesome online course on data visualization called “Introduction to Infographics and Data Visualization.” It is sponsored by the Knight Center for Journalism in the Americas and taught by the extremely talented Alberto Cairo. If you have a quick second, check out his website because it’s phenomenal.

Anyways, we have reached the final week of the course and Cairo had us all make a final interactive project. He gave us free rein to do whatever we wanted. We just had to pick a topic we’re passionate about. Given that I was a cops and courts reporter for a year in Galesburg, Illinois before moving to Waterloo, Iowa, I am passionate about crime reporting. So I decided for my final project I’d examine states with high violent crime rates and see what other characteristics they have. Do they have higher unemployment rates? Or lower education rates? What about wage rates?

Obviously, this is the type of project that could be expanded upon. I limited my final project to just four topics mostly because of time constraints. I work a full-time job, you know! Anyways, here’s my final project. Let me know if you have any suggestions for improvement.


About:

Data: Information for the graphic was collected from four different sources, which are all listed when you click on the graphic. I took the spreadsheets from the listed websites and took out what I wanted, making CSVs for each of the four categories broken down in the interactive.

Map: The shapefile for the United States was taken from the U.S. Census Bureau’s website. Find it by going to this link, and selecting “States (and equivalent)” from the dropdown menu. I then simplified the shapefile by about 90 percent using this website. Simplifying basically makes the lines on the states’ polygons less precise but dramatically reduces the size of the file. This is important because people aren’t going to want to wait all day for your maps to load.

Merging data with map: I then opened that shapefile with an awesome, open-source mapping program called QGIS. I loaded up the four spreadsheets of data in QGIS as well using the green “Add vector layer” button (This is important! Don’t use the blue “Create a Layer from a Delimited Text File” button). The shapefile and all the spreadsheets will now show up on the right side of the screen in QGIS under “Layers.”

Each spreadsheet had a column for the state name, which matched the state names in each row of the shapefile. It’s important these state names match exactly. For instance, the FBI’s crime data had the state names in all caps, so I turned “IOWA” into “Iowa” before loading it into QGIS to match the capitalization used in the shapefile (“Iowa” in the above example).

Then you can open the shapefile’s properties in QGIS and merge the data from the spreadsheets with the data in the shapefile using the “Joins” tab. Finally, right click on the shapefile’s layer, select “Save As” and export it as a GeoJSON file. We’ll use this with the wonderful mapping library Leaflet.


Leaflet: I used Leaflet to make the map. It’s awesome. Check it out. I won’t get into how I made the map interactive with Javascript because it’s copied very heavily from this tutorial put out by Leaflet. Also check it out. The only thing I did differently was basically make separate functions (mentioned in the tutorial) for each of my four maps. There is probably (definitely) a better way to do this but I kind of ran out of time and went with what I had. If you’re looking for the full code, go here and click “View Page Source.”
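For reference, the core of each of those functions looks something like this sketch, patterned on Leaflet’s choropleth tutorial. The “statesData” variable (the exported GeoJSON) and the “crime_rate” property are stand-in names for illustration.

// A rough sketch of one of the four map layers, patterned on Leaflet's
// choropleth tutorial. "statesData" and "crime_rate" are stand-in names.
function crimeStyle(feature) {
    return {
        fillColor: feature.properties.crime_rate > 500 ? '#800026' : '#FED976',
        weight: 1,
        color: '#999',
        fillOpacity: 0.7
    };
}

L.geoJson(statesData, {
    style: crimeStyle,
    onEachFeature: function(feature, layer) {
        layer.bindPopup(feature.properties.name + ': ' + feature.properties.crime_rate);
    }
}).addTo(map);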

Design: The buttons used are from Twitter’s Bootstrap. I used jQuery’s show/hide functions to show and hide all the elements on the page. These included DIVs for the legend, map and header.
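The wiring for those buttons is simple jQuery; something like this sketch, with made-up ids for one of the four topics:

// Hypothetical button wiring -- the ids are invented for illustration.
$('#unemployment_button').click(function() {
    $('.map_div, .legend_div').hide();   // hide whichever map and legend are showing
    $('#unemployment_map, #unemployment_legend').show();
});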

GeoJSON: The last thing I did was modify my GeoJSON file. You’ll notice how the top 10 states for violent crime rates are highlighted in black on the maps to more easily compare their characteristics across the maps. Well, I went into the GeoJSON and put those 10 states’ attributes at the bottom of the file. That way they are loaded last on the map and thus appear on top of the other states. If you don’t do this, the black outlines for the states don’t show up very well and look like crap. Here’s the GeoJSON file for reference.

Hopefully that will help guide others. If you have any questions, feel free to drop me a line. Thanks!

Written by csessig

December 11, 2012 at 12:13 am

How We Did It: Waterloo crime map


Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Last week we launched a new feature on the Courier’s website: A crime map for the city of Waterloo that will be updated daily Monday through Friday.

The map uses data provided by the Waterloo police department. It’s presented in a way to allow readers to make their own stories out of the data.

(Note: The full code for this project is available here.)

Here’s a quick run-through of what we did to get the map up and running:

1. Turning a PDF into manageable data

The hardest part of this project was the first step: Turning a PDF into something usable. Every morning, the Waterloo police department updates their calls for service PDF with the latest service calls. It’s a rolling PDF that keeps track of about a week of calls.

The first step I took was turning the PDF into an HTML document using the command line tool PDFtoHTML. For Mac users, you can install it by going to the command line and typing in “brew install pdftohtml.” Then run “pdftohtml -c (ENTER NAME OF PDF HERE)” to turn the PDF into an HTML document.

The PDF we are converting is basically a spreadsheet. Each cell of the spreadsheet is turned into a DIV with PDFtoHTML. Each page of the PDF is turned into its own HTML document. We will then scrape these HTML documents using the programming language Python, which I have blogged about before. The Python library that will allow us to scrape the information is Beautiful Soup.

The “-c” command adds a bunch of inline CSS properties to these DIVs based on where they are on the page. These inline properties are important because they help us get the information off the spreadsheet we want.

All dates and times, for instance, are located in the second column. As a result, all the dates and times have the exact same inline left CSS property of “107” because they are all the same distance from the left side of the page.

The same goes for the dispositions. They are in the fifth column and are farther from the left side of the page so they have an inline left CSS property of “677.”

We use these properties to find the columns of information we want. The first thing we want is the dates. With our Python scraper, we’ll grab all the data in the second column, which is all the DIVs that have an inline left CSS property of “107.”

We then have a second argument that uses regular expressions to make sure the data is in the correct format, i.e. numbers and not letters. We do this to make sure we are pulling dates and not text accidentally.

The second argument is basically an insurance policy. Everything we pull with the CSS property of “107” should be a date. But we want to be 100% sure, so we’ll use regular expressions to verify we’re getting integers and not a string.

The third column is the reported crimes. But in our converted HTML document, crimes are actually located in the DIV previous to the date + time DIV. So once we have grabbed a date + time DIV with our Python scraper, we will check the previous DIV to see if it matches one of the seven crimes we are going to map. For this project, we decided not to map minor reports like business checks and traffic stops. Instead we are mapping the seven most serious reports.

If it is one of our seven crimes, we will run one final check to make sure it’s not a cancelled call, an unfounded call, etc. We do this by checking the disposition DIVs (column five in the spreadsheet), which are located before the crime DIVs. Also remember that all these have an inline left CSS property of “677”.

So we check these DIVs with our dispositions to make sure they don’t contain words like “NOT NEEDED” or “NO REPORT” or “CALL CANCELLED.”

Once we know it’s a crime that fits into one of our seven categories and it wasn’t a cancelled call, we add the crime, the date, the time, the disposition and the location to a CSV spreadsheet.

The full Python scraper is available here.

2. Using Google to get latitude, longitude and JSON

The mapping service I used was Leaflet, as opposed to Google Maps. But we will need to geocode our addresses to get latitude and longitude information for each point to use with Leaflet. We also need to convert our spreadsheet into a Javascript object file, also known as a JSON file.

Fortunately that is an easy and quick process thanks to two gadgets available to us using Google Docs.

The first thing we need to do is upload our CSV to Google Docs. Then we can use this gadget to get latitude and longitude points for each address. Then we can use this gadget to get the JSON file we will use with the map.

3. Powering the map with Leaflet, jQRangeSlider, DataTables and Bootstrap

As I mentioned, Leaflet powers the map. It uses the latitude and longitude points from the JSON file to map our crimes.

For this map, I created my own icons. I used a free image editor known as Seashore, which is a fantastic program for those who are too cheap to shell out the dough for Adobe’s Photoshop.

The date range slider below the map is a very awesome tool called jQRangeSlider. Basically every time the date range is moved, a Javascript function is called that will go through the JSON file and see if the crimes are between those two dates.

This Javascript function also checks to see if the crime has been selected by the user. Notice on the map the check boxes next to each crime logo under “Types of Crimes.”

If the crime is both between the dates on the slider and checked by the users, it is mapped.
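In sketch form, that filter looks something like the code below. The “crimes” array, the checkbox ids and the clearMarkers()/addMarker() helpers are assumptions for illustration; jQRangeSlider’s “valuesChanged” event supplies the current minimum and maximum dates.

// A sketch of the date-and-checkbox filter, not the map's exact code.
$('#slider').bind('valuesChanged', function(e, data) {
    clearMarkers();   // hypothetical helper that wipes the old markers
    $.each(crimes, function(i, crime) {
        var d = new Date(crime.date);
        var inRange = d >= data.values.min && d <= data.values.max;
        var checked = $('#' + crime.type + '_checkbox').is(':checked');
        if (inRange && checked) {
            addMarker(crime);   // hypothetical helper that maps the point
        }
    });
});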

While this is going on, an HTML table of this information is being created below the map. We use another awesome tool called DataTables to make that table of crimes interactive. With it, readers can display up to 100 records on the page or search through the records.

Finally, we create a pretty basic bar chart using the Progress Bars made available by Bootstrap, an awesome framework released by the people who brought us Twitter.

Creating these bars is easy: We just need to create DIVs and give them a certain class so Bootstrap knows how to style them. We create a bar for each crime that is automatically updated when we tweak the map.
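Updating a bar is then just a matter of resizing the inner DIV that Bootstrap styles. A hedged sketch, with a made-up id:

// Hypothetical sketch: resize one progress bar as the crime counts change.
// "#assault_bar" is a made-up id; Bootstrap styles the ".bar" DIV inside it.
function updateBar(count, total) {
    var pct = total > 0 ? (count / total) * 100 : 0;
    $('#assault_bar .bar').css('width', pct + '%');
}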

For more information on progress bars, check out the documentation from Bootstrap. I also want to thank the app team at the Chicago Tribune for providing the inspiration behind the bar chart with their 2012 primary election app.

The full Javascript file is available here.

4. Daily upkeep

This map is not updated automatically so every day, Monday through Friday, I will be adding new crimes to our map.

Fortunately, this only takes about 5-10 minutes of work. Basically I scrape the last few pages of the police’s crime log PDF, pull out the crimes that are new, pull them into Google Docs, get the latitude and longitude information, output the JSON file and put that new file into our FTP server.

Trust me, it doesn’t take nearly as long as it sounds to do.

5. What’s next?

Besides minor tweaks and possible design improvements, I have two main goals for this project in the future:

A. Create a crime map for Cedar Falls – Cedar Falls is Waterloo’s sister city and like the Waterloo police department, the Cedar Falls police department keeps a daily log of calls for service. They also post PDFs, so I’m hoping the process of pulling out the data won’t be drastically different than what I did for the Waterloo map.

B. Create a mobile version for both crime maps – Maps don’t work tremendously well on mobile phones. So I’d like to develop some sort of alternative for mobile users. Fortunately, we have all the data. We just need to figure out how to display it best for smartphones.

Have any questions? Feel free to e-mail me at chris.essig@wcfcourier.com.

Tip: Embedding Vmix videos into a Google Fusion table


Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

For any map makers out there, here’s a walk-through on how to take a Vmix video and post it into a Google Fusion table. It’s a perfect follow-up to the tutorials Chris Keller and I held a month ago with Lee journalists on how to build maps with Google Fusion Tables.

1. First we need a Vmix video so post one onto your website like you normally would by uploading it to Vmix and pulling it into the Blox CMS. I’m going to use this video in our example.

2. View the source of the page by right clicking on the page and selecting “View page source.” Then search for a DIV with the class “vmix-player.”

3. Under that should be a Javascript file with a source that starts with “http://media.vmixcore.com/”. Click on that link to open up the source in a new window. You should now see a screen with a huge “Not Found” warning. But don’t be discouraged.

4. Now view the source of that page by doing the same thing you did before (Right click > “View page source”).

5. You should now see a page with three variables: t, u and h. The variable we want is “h”, which is the object tag we will embed into the map.

The page should look something like this.

6. Clean up the variable by removing these pieces of code:

var h = "

(This marks the beginning of the variable.)

h += "

(There should be several of these. Basically this adds whatever follows it to the “h” variable, hence the plus sign.)

";

(These are at the end of every line of code.)

7. Now we need to replace all references to the “t” and “u” variables with their actual value. You’ll notice that “t” and “u” appear in the code for the “h” variable and are surrounded by plus signs. Basically that is just telling Javascript to put whatever “t” equals into that spot. We’ll do that manually:

So replace:

" + t + "

With:

location.href

And replace:

" + u + "

With:

http://cdn-akm.vmixcore.com/player/2.0/player.swf?player_id=48df95747124fbbad1aea98cee6e46e4

(Your link will be different than mine)

– It’s important to note that you need to delete the double quotes and the plus signs before and after “t” and “u”. Your final code should not have a double quote next to a single quote. We should have just single quotes around “location.href” and our “http://cdn-akm.vmixcore.com” link.

– It’s also important to note that when we grab the “t” and “u” variables, we don’t grab the semi-colon at the end of the variable or the quotes around the “u” variable.

For instance, let’s say we have this for our “u” variable:

var u = "http://cdn-akm.vmixcore.com/player/2.0/player.swf?player_id=48df95747124fbbad1aea98cee6e46e4";

So on our movie parameter, we’ve turned this line of code:

<param name='movie' value='" + u + "'/>

Into this line of code:

<param name='movie' value='http://cdn-akm.vmixcore.com/player/2.0/player.swf?player_id=48df95747124fbbad1aea98cee6e46e4'/>

– Repeat this for every reference of “t” and “u” in the code.

Our final piece of code should look like this garbled mess.

8. The final step is to post that object tag above into a Google Fusion Table. The easiest way to do this is to create a new column called “video” and simply put the above code into that column’s rows.

9. Then configure the info window (Visualize > Map > Configure info window) and make sure the “video” column name appears in the HTML of the info window.

If you want to pull in just the “video” column and nothing else, your HTML would look like this:

<div class="googft-info-window">{video}</div>

The result looks something like this map. Click on the blue marker labeled “3” to see the video.

I am using the Fusion Table API to make my map instead of using the embed code provided by Google. It seems to work better with videos. If you are interested in seeing my full code for this map, click here.
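For the curious, a bare-bones Fusion Tables layer built through the Google Maps API looks something like the sketch below; the map options are typical of that API, and the table id and “Location” column are placeholders for illustration.

// A bare-bones sketch of a FusionTablesLayer -- not my map's exact code.
var map = new google.maps.Map(document.getElementById('map_canvas'), {
    center: new google.maps.LatLng(42.49, -92.34),
    zoom: 12,
    mapTypeId: google.maps.MapTypeId.ROADMAP
});

var layer = new google.maps.FusionTablesLayer({
    query: {
        select: 'Location',
        from: 'YOUR_TABLE_ID'   // placeholder table id
    }
});
layer.setMap(map);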

That’s it. If you have any questions or something doesn’t make sense, please leave a comment or e-mail at chris.essig@wcfcourier.com.

Written by csessig

July 28, 2012 at 5:49 pm

Spreadsheet tutorials to help journalists clean and format data


Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

I was talking to a colleague the other day who was interested in getting into computer programming and more data projects. He asked where the best place to start was.

My gut reaction was to tell him to learn the basics of spreadsheets. Almost all of the data I have used with projects — whether they end up being a map or a graphic — were initially set up in spreadsheet form.

Most of us are familiar with spreadsheets and have likely worked with Microsoft Excel. Agencies both big and small often gather spreadsheets of useful information for us to use with our stories.

The problem is most of the time the data isn’t formatted right or contains inaccuracies like misspellings. That is where the magic of spreadsheet formulas can come in to help organize your data.

Here are a few resources you might find handy for working with spreadsheets.

1. Poynter: How journalists can use Excel to organize data for stories

This walkthrough is great because it is written directly for journalists. It is also intended for beginners so those with no spreadsheet knowledge will be able to keep up. Finally, the walkthrough is intended for both print and online journalists. So journalists who have no intention of making visualizations will still find a handful of features in Excel to help them with their day-to-day reporting.

2. My Favorite (Excel) Things

This PDF provided by Mary Jo Webster at the St. Paul Pioneer Press is a great, concise list of Excel formulas she uses all the time. It includes formulas on how to format dates, run sums and even run if-else statements in Excel. It’s one of my favorite resources for Excel and definitely worth bookmarking.

3. Google Docs – Spreadsheet resources

Not a fan of Excel or don’t have it installed on your work computer? Fortunately you can make a spreadsheet with Google by logging into Google Drive and clicking “Create > Spreadsheet.” The best part is spreadsheets you create with Google can be accessed from any computer as long as you log in with your Google account.

Here are some resources for getting started with Google spreadsheets:

4. Google Refine resources

Sometimes you need more than just spreadsheet formulas to clean dirty data. That’s where the powerful Google Refine program can come into play. The program was designed to clean dirty data by finding inconsistencies in your spreadsheets. It can also help you sort data, add to data, transform it from one service to another and much, much more.

Here are some resources you might find handy:

5. NICAR-L e-mail list

Still stuck? Fortunately, there is a wonderful community of computer-assisted reporters who are more than willing to help others out. If you want great information on spreadsheets or any other data journalism topic, check out the National Institute for Computer-Assisted Reporting’s email list. Questions on Excel come up almost every day.

Have any other useful resources not listed here? E-mail me at chris.essig@wcfcourier.com.

Written by csessig

June 25, 2012 at 9:37 am

Data journalism resources


Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Okay this is going to be a quick blog post since we and every other Lee newspaper are switching our sites over to Templates 2.0. Here are a couple of neat websites I’ve run across in the past couple of weeks that may help with your next data story and/or visualization:

1. Data Journalism Handbook

Want to know about the importance of data journalism today and what is possible for you to do at your own newspaper? Then this handbook is a must read. One of the great aspects of the book is it shows how data can be used not only to make impressive graphics but also great stories. And best of all it is free and growing.

2. DataVisualization: Selected tools

Here’s a great resource for many of the tools used by data journalists across the world. For instance, Google Fusion Tables is the go-to service for map-makers, while Google Refine is a great program for cleaning data to use with maps, visualizations or just stories. Also check out ColorBrewer to help you find great color patterns.

One Javascript library worth checking out is D3.js, which has been used by the New York Times to create some jaw-dropping visualizations. Also worth noting are two blog posts from software developer Jim Vallandingham. One deals with making bubble charts with D3.js. The second shows how to use the library with older versions of Internet Explorer. This is a must-read for sites like ours that have a high percentage of readers who use Internet Explorer.

3. Data Stories

If you are like me, you love listening to podcasts, especially after work when your eyes can no longer stare at your computer screen. This site features a series of podcasts on data visualization. Topics include how to learn data visualization, when to use animated graphics and other advice from people smarter than I will ever be. So check it out if you are looking for something new to listen to while you hit the gym, bike trail or couch.

Written by csessig

June 5, 2012 at 2:00 pm

Turning Blox assets into timelines: Part 2


Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Also note: You will need to run on the Blox CMS for this to work. That said, you could probably learn a thing or two about web scraping even if you don’t use Blox.

For part one of this tutorial, click here. For part three, click here.

On my last blog, I discussed how you can turn Blox assets into a timeline using a tool made available by ProPublica called TimelineSetter.

If you recall, most of the magic happens with a little Python script called Timeline.py. It scrapes information from a page and puts it into a CSV file, which can then be used with TimelineSetter.

So what’s behind this Timeline.py file? I’ll go through the code by breaking it down into chunks. The full code is here and is heavily commented to help you follow along.

(NOTE: This python script is based off this tutorial from BuzzData. You should definitely check it out!)

– The first part of the script is basically the preliminary work. We’re not actually scraping the web page yet. This code first imports the necessary libraries for the script to run. We are using a Python library called BeautifulSoup that was designed for web scraping.

We then create a CSV to put the data in using Python’s open function, and we write an initial header row to the file with the write method. Also be sure to enter the URL of the page you want to scrape.

Note: For now, ignore the line “now = datetime.datetime.now().” We will discuss it later.

import urllib2
from BeautifulSoup import BeautifulSoup
import datetime
import re

now = datetime.datetime.now()

# Create a CSV where we'll save our data. See further docs:
# http://propublica.github.com/timeline-setter/#csv
f = open('timeline.csv', 'w')

# Make the header rows. These are based on headers recognized by TimelineSetter.
f.write("date" + "," + "description" + "," + "link" + "," + "html" + "\n")

# URL we will scrape
url = 'http://wcfcourier.com/test/scrape/dunkerton/'
page = urllib2.urlopen(url)
soup = BeautifulSoup(page)

– Before we go any further, we need to look at the page we are scraping, which in this example is this page. It’s basically a running list of articles about a particular subject. All of these stories will go on the timeline.

Now we’ll ask: what do we actually want to pull from this page? For each article we want to pull: the headline, the date, the photo, the first paragraph of the story and the link to the article.

Now we need to become familiar with the HTML of the page so we can tell BeautifulSoup what HTML attributes we want to pull from it. Go ahead and open the page up and view its source (Right click > View page source for Firefox and Chrome users).

One of the easiest things we can do is just search for the headline of the first article. So type in “Mayor’s arrest rattles Dunkerton.” This will take us to the chunk of code for that article. You’ll notice how the headline and all the other attributes for the story are contained in a DIV with the class “story-block.’

All stories on this page are formatted the same so every story is put into a DIV with the class ‘story-block.’ Thus, the number of DIVs with the class ‘story-block’ is also equal to the number of articles on the page we want to scrape.

– For the next line of code, we put that list of ‘story-block’ DIVs (however many there may be) into a variable called ‘events.’ The line after that is what is known as a ‘for loop.’ Together, these two lines tell Python to run the ‘for loop’ once for every DIV in the list.

So if we have five articles we want to scrape, the ‘for loop’ will run five times. If we have 25 articles, it will run 25 times.

events = soup.findAll('div', attrs={'class': 'story-block'})
for x in events:

– Inside the ‘for loop,’ we need to tell it what information from each article we want to pull. Now go back to the source of the page we are scraping and find the headline, the date, the photo, the first paragraph of the story and the link to the article. You should see that:

  • The date is in a paragraph tag with the class ‘story-more’
  • The link appears several times, including within a tag called ‘fb:like,’ which is the Facebook like button people can click to share the article on Facebook.
  • The headline is in a h3 tag, which is a header tag.
  • The first few paragraphs of the story are contained within a DIV with the id ‘blox-story-text.’ Note: In the Python script, we will tell BeautifulSoup to pull only the first paragraph.
  • The photo is contained within an img tag, which shouldn’t be a surprise.

So let’s put all of that in the ‘for loop’ so it knows what we want from each article. The code below uses BeautifulSoup syntax, which you can find out about by reading their documentation.

    # Information on the page that we will scrape
    date = x.find('p', attrs={'class': 'story-more'})('em')
    link = x.find('fb:like')['href']
    headline = x.find('h3').text
    description = x.find('div', attrs={'id': 'blox-story-text'})('p', limit=1)
    image = x.find('img')

One note about the above code: the ‘x’ holds the article the ‘for loop’ is currently on. For example, say we want to scrape 20 articles. The first time we run the ‘for loop,’ ‘x’ holds the first ‘story-block’ DIV. The second time through, it holds the second. The last time through, it holds the 20th.

Because ‘x’ changes with each pass, we pull information from a different article every time we go through the ‘for loop.’ The first time through, we pull information from the first article. And the second time through, we pull information from the second article.

If we simply referenced the page instead of ‘x,’ we’d run through the ‘for loop’ 20 times but pull the same information each time. The ‘x’ in combination with the ‘for loop’ basically tells BeautifulSoup to start with one article, then move onto the next and so on until we’ve scraped all the articles we want to scrape.

– Now you should be well on your way to creating timelines with Blox assets. For the third and final part of this tutorial, we will just clean up the data a little bit so it looks nice on the page. Look for the final post of this series soon!

Written by csessig

March 7, 2012 at 2:21 pm

A few reasons to learn the command line


Note: This is my first entry for Lee Enterprises’ data journalism blog. Reporters at Lee newspapers can read the blog by clicking here.

As computer users, we have grown accustomed to what is known as the graphical user interface (GUI). What’s GUI, you ask? Here are a few examples: When you drag and drop a text document into the trash, that’s GUI in action. Or when you create a shortcut on your desktop, that’s GUI in action. Or how about simply navigating from one folder to the next? You guessed it: that’s GUI in action.

GUI, basically, is the process of interacting with images (most notably icons on computers) to get things done on electronic devices. It’s easy and we all do it all the time. But there is another way to accomplish many tasks on a computer: using text-based commands. Welcome to the command line.

So where do you enter these text-based commands and accomplish these tasks? There is a nifty little program called the Terminal on your computer that does the work. If you’ve never opened up your computer’s Terminal, now would be a good time. On my Mac, it’s located in the Applications > Utilities folder.

A scary black box will open up. Trust me, I know: I was scared of it just a few months ago. But I promise there are compelling reasons for journalists to learn the basics of the command line. Here are a few:

1. Several programs created by journalists for journalists require the command line.

Two of my favorite tools out there for journalists come from ProPublica: TimelineSetter and TableSetter.

The first makes it easy to create timelines. We’ve made a few at the Courier. The second makes easily searchable tables out of spreadsheets (more specifically, CSV files), which we’ve also used at the Courier. But to create the timelines and tables, you’ll need to run very basic commands in your Terminal.

It’s worth noting the LA Times also has its own version of TableSetter called TableStacker that offers more customizations. We used it recently to break down candidates running in our local elections. Again, these tables are created after running a simple command.

The New York Times has a host of useful tools for journalists. Some, like Fech, require the command line to run. Fech can help journalists extract data from the Federal Election Commission to show who is spending money on whom in the current presidential campaign cycle.

2. Other programs/tools that journalists can use:

Let’s say you want to pull a bunch of information from a website to use in a story or visualization, but copying and pasting the text is not only tedious but very time-consuming.

Why not just scrape the data using a program made in a language like Python or Ruby and put it in a spreadsheet or Word document? After all, computers are great at performing tedious tasks in just a few minutes.

One of my favorite web scraping walkthroughs comes from BuzzData. It shows how to pull water usage rates for every ward in Toronto and can easily be applied to other scenarios (I used it to pull precinct information from the Iowa GOP website). The best way to run this program and scrape the data is to run it through your command line.

Another great walkthrough on data scraping is this one from ProPublica’s Dan Nguyen. Instead of using the Python programming language, like the one above, it uses Ruby. But the goal remains the same: making data manageable for both journalists and readers.

A neat mapping service that is gaining popularity at news organizations is TileMill. Here are a few examples to help get you motivated.

One of the best places to start with TileMill is this walkthrough from the application team at the Chicago Tribune. But beware: you’ll need to know the command line before you start (trust me, I learned the hard way).

3. You’ll impress your friends because the command line kind of looks like the Matrix

And who doesn’t want that?

Okay, I’m sort of interested… How do I learn?

I can’t tell people enough how much these two command line tutorials from PeepCode helped me. I should note that each costs $12 but is well worth it, in my opinion.

Also, there is this basic tutorial from HypeXR that will help. And these shortcuts from LifeHacker are also great.

Otherwise, a quick Google or YouTube search will turn up thousands of results. A lot of programmers use the command line and, fortunately, many are willing to help teach others.

Written by csessig

January 31, 2012 at 9:21 am