Chris Essig

Walkthroughs, tips and tricks from a data journalist in eastern Iowa

Archive for the ‘computer programming’ Category

D3 formula: Splitting elements into columns

leave a comment »

D3 can be a tricky — but powerful — beast. A month ago, I put together my most complex D3 project to date, which helped explain Iowa’s new Medicaid system.

One of the first places I started on this project was building a graph that takes icons and divides them into buckets. I didn’t see any openly available code that replicated what I was trying to do, so I figured I’d post my code online for anyone to use, replicate or steal.

For this project, I put the icons into three columns. In each column, icons are placed side by side until we get four icons in a row, then another row is created. You can see this in action by clicking the button several times. And all of this can be adjusted in the code with the “row_length” and “column_length” variables.

To move the icons, I overlaid three icons on top of each other. When the button is clicked, each of the icons gets sent to one of the columns. The icons shrink as they reach their column. After this transition is finished, three more icons are placed on the DOM. And then the whole process starts over.

A bunch of math is used to determine where exactly on the DOM each icon needs to go. Also, we have to keep track of how many icons are on the DOM already, so we can break the icons into new rows if need be.

Data is required to make D3 run, so my data is a simple array of [0,1,2]. The array has three values because we have three columns on the page. We do some calculations on the values themselves and then use SVG’s transform attribute to properly place the icons on the DOM. We use D3’s transition function to make it appear like the icons are moving into the columns, not just being placed there.
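To make that concrete, here is a stripped-down sketch of the column/row math rather than the project’s actual code; the variable names, sizes and the use of plain circles instead of icons are placeholders:

var row_length = 4;      // icons per row inside a column before wrapping
var icon_size = 20;      // pixel spacing between icons (placeholder value)
var column_width = 150;  // horizontal gap between the three columns
var counts = [0, 0, 0];  // how many icons each column already holds

function addIcons() {
  // One datum per column, mirroring the [0,1,2] array described above
  d3.select("svg").selectAll(".incoming")
      .data([0, 1, 2])
    .enter().append("circle")
      .attr("class", "icon")
      .attr("r", 10)
      .attr("transform", "translate(250,20)")  // start the three icons stacked together
    .transition().duration(750)
      .attr("r", 5)                             // shrink as they travel to their columns
      .attr("transform", function(d) {
        var n = counts[d]++;                           // slot within this column
        var x = d * column_width + (n % row_length) * icon_size;
        var y = 80 + Math.floor(n / row_length) * icon_size;
        return "translate(" + x + "," + y + ")";
      });
}

Each call to addIcons() (on a button click, say) drops three more icons into the columns, wrapping to a new row after every fourth icon.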

Hopefully this helps others facing similar problems in D3. If you have any questions, just leave them in the comments.

Written by csessig

May 2, 2016 at 12:30 pm

Save your work because you never know when it will disappear

with 2 comments

We are a few weeks into the new year, but I wanted to look back at the biggest project I worked on in 2015: The redesign of KCRG.com.

While most of my blog posts are full of links, I can’t link to that site. Why? Because it’s gone.

What?

In a series of very unfortunate events, the site we spent many, many months planning and developing is already gone.

The timeline: We started preparing for the redesign, which was BADLY needed, in early 2015. We then built it over the course of several months. Finally, it was launched in July. Then, in a move that surprised everyone, KCRG was bought by another company, Gray Television, in September.

At the time, I was optimistic that the code could be ported over to Gray’s CMS. And the site wouldn’t die.

My optimism was short-lived. Gray has a standard website template for all its news sites, and they wanted that template on KCRG.

So in December, the website we built disappeared for good.

 

The KCRG website you see now is the one used and maintained by Gray.

Obviously, this was a big shock for our team. Even worse, the code we wrote was proprietary and required Newscycle Solutions’ CMS to parse and display it. So even if I wanted to put it on Github, it wouldn’t do anyone any good.

I’m not used to the impermanence of the web. When I had my first reporting job in Galesburg, I saved all the newspapers where my stories appeared. And unless my parents’ house catches on fire, those newspapers will last for a long time. They are permanent.

Not so online. Websites disappear all the time. And those who build them have barely any record of their existence.

Projects like PastPages and the Wayback Machine keep screenshots of old websites, which is better than nothing. But their archives are a far cry from the living, breathing websites we build. A screenshot can’t show you nifty Javascript.

It’s an eerie feeling. What happens in five years? Ten years? Twenty years? Will any of our projects still be online? Even worse: Will technology have changed so much that these projects won’t even be capable of being viewed online? Will online even exist?

Think about websites from 1996. They are long gone. Hell, many sites from two years ago have vanished.

I don’t have good answers. Jacob Harris has mulled this topic and offered some good tips for making your projects last.

But it’s worth pondering when you finish a project: What can I do to save my work for the future? I have a directory of all of my projects from my Courier days on an external hard drive. I have an in-progress directory for The Gazette as well.

I hold onto them like I did my old newspaper clippings. Although, I’m confident those clippings will last a lot longer than my web projects.

Written by csessig

January 21, 2016 at 11:52 am

Create an easy before, after photo interactive

with one comment

Note: This is cross-posted from Lee’s data journalism blog, which you can read by clicking here.

When we got our first big snow storm of the winter season a few months ago, we tried something a little bit different at the Courier.

Before the storm hit, we asked our Facebook fans to capture before and after photos of the outdoors taken from the same spot, then submit their photos to us. With those photos, we put together this before/after interactive, which gave us a nice little traffic boost and complemented our weather coverage well.

We have done something similar to this in the past: This interactive, for instance, looks at urban sprawl in the Cedar Valley. It uses a slider that the reader needs to click and drag to toggle between the photos.

The slider works very well. However, I loved what the Detroit Free Press did recently when they compared before and after photos of a car manufacturing plant in their area (which Marisa Kwiatkowski wrote about in Lee’s data journalism blog). Instead of using a slider, the Free Press had the photos change based on mouse movement. The result is incredibly smooth and loads quickly.

After digging through their code, I found out it is very easy to do. Here’s how we did it at the Courier.

1. Set up a basic HTML page

<!DOCTYPE html>
<html>
<head>
<title>Before/after demo</title>
</head>
 
<body>
</body>
 
</html>

That’s about as basic as it gets right there.

2. Add your images to the “body” of the HTML page

<div style="position:relative; width:600px; height:460px;" class="trackMe">
    <img src="http://wcfcourier.com/app/special/beforeafter/beforeafter05b.jpg" class="beforeafterphotos" />
    <img src="http://wcfcourier.com/app/special/beforeafter/beforeafter05a.jpg" class="beforeafterphotos" />
</div>

Replace the URLs for our images with your own. Make sure the images are the same size. You’ll also want the before and after photos to be as similar to each other as possible. I spent quite a bit of time editing, cropping and rotating the photos users sent us so they lined up as closely as they could.

It is also important to note that the “after” image (beforeafter05b.jpg in the example above) is placed on the page before the “before” image is (beforeafter05a.jpg in the example above).

Lastly, it is very important that you wrap the images in a DIV with the class “trackMe,” and that each image gets the class “beforeafterphotos.”

3. Add this Javascript code in the “head” of the HTML page

<script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
<script type="text/javascript" language="JavaScript">
$(document).ready(function() {
  $('.trackMe').each(function() {
    // track mouse movement over the last (topmost) image, which is the "before" photo
    $(this).children("img:last").mousemove(function(e) {
      var offset = $(this).offset();
      var xpos = (e.pageX - offset.left);
      var ypos = (e.pageY - offset.top);
      // get this image's width and turn the x position into a percentage
      var thisImage = $(this);
      var thisWidth = thisImage.width();
      var pct = Math.round((xpos / thisWidth) * 100) / 100;
      var ipct = Math.abs(Math.round(((xpos - thisWidth) / thisWidth) * 100) / 100);
      // fade the "before" image out as the mouse moves to the right
      thisImage.css({ 'opacity': ipct });
    });
  });
});
</script>

This code basically detects mouse movement over the second of the two images, which is actually the “before” image (see above). It then figures out where the mouse is in relation to the image. Then it sets the opacity of that image based on the mouse’s location.

So if the mouse is at the left edge of the image, the “before” image’s opacity is set to 100 percent and it is fully visible. If the mouse is in the middle of the image, the opacity is set to 50 percent and the image is halfway visible. If the mouse is at the right edge, the opacity is set to 0 percent and the image is invisible.

This function is called every time the mouse is moved. The effect for the reader is that as they move their mouse from the left side of the image to the right, the “before” image slowly fades out and reveals the “after” image.

4. Add this CSS code to the “head” of the HTML page

<style>
.trackMe img.beforeafterphotos {
  top: 0 !important;
  left: 0 !important;
  position: absolute;
  margin: 0 0 15px 0 !important;
}
</style>

This code just makes sure the images are layered on top of one another (overlaid), instead of being displayed one after the other down the page.

5. Full code is below:

<!DOCTYPE html>
<html>
<head>
<title>Before/after demo</title>
 
<style>
.trackMe img.beforeafterphotos {
  top: 0 !important;
  left: 0 !important;
  position: absolute;
  margin: 0 0 15px 0 !important;
}
</style>
 
<script src="http://code.jquery.com/jquery-latest.min.js" type="text/javascript"></script>
<script type="text/javascript" language="JavaScript">
$(document).ready(function() {
  $('.trackMe').each(function() {
    // track mouse movement over the last (topmost) image, which is the "before" photo
    $(this).children("img:last").mousemove(function(e) {
      var offset = $(this).offset();
      var xpos = (e.pageX - offset.left);
      var ypos = (e.pageY - offset.top);
      // get this image's width and turn the x position into a percentage
      var thisImage = $(this);
      var thisWidth = thisImage.width();
      var pct = Math.round((xpos / thisWidth) * 100) / 100;
      var ipct = Math.abs(Math.round(((xpos - thisWidth) / thisWidth) * 100) / 100);
      // fade the "before" image out as the mouse moves to the right
      thisImage.css({ 'opacity': ipct });
    });
  });
});
</script>
</head>
 
<body>
 
<div style="position:relative; width:600px; height:460px;" class="trackMe">
	<img src="http://wcfcourier.com/app/special/beforeafter/beforeafter05b.jpg" class="beforeafterphotos" />
	<img src="http://wcfcourier.com/app/special/beforeafter/beforeafter05a.jpg" class="beforeafterphotos" />
</div>
 
 
</body>
</html>

That’s all it takes!

6. Add more images

If you want to add more images, just repeat step 2 with the new images. For the complete code that I used on the snow before/after project, click here.

Written by csessig

February 25, 2013 at 3:09 pm

Final infographics project

leave a comment »

For the last six weeks, I’ve taken a very awesome online course on data visualization called “Introduction to Infographics and Data Visualization.” It is sponsored by the Knight Center for Journalism in the Americas and taught by the extremely talented Alberto Cairo. If you have a quick second, check out his website because it’s phenomenal.

Anyways, we have reached the final week of the course, and Cairo had us all make a final interactive project. He gave us free rein to do whatever we wanted. We just had to pick a topic we’re passionate about. Given that I was a cops and courts reporter for a year in Galesburg, Illinois, before moving to Waterloo, Iowa, I am passionate about crime reporting. So I decided for my final project I’d examine states with high violent crime rates and see what other characteristics they share. Do they have higher unemployment rates? Or lower education rates? What about wage rates?

Obviously, this is the type of project that could be expanded upon. I limited my final project to just four topics, mostly because of time constraints. I work a full-time job, you know! Anyways, here’s my final project. Let me know if you have any suggestions for improvement.


About:

Data: Information for the graphic was collected from four different sources, which are all listed when you click on the graphic. I took the spreadsheets from the listed websites, pulled out what I wanted and made a CSV for each of the four categories broken down in the interactive.

Map: The shapefile for the United States was taken from the U.S. Census Bureau’s website. Find it by going to this link, and selecting “States (and equivalent)” from the dropdown menu. I then simplified the shapefile by about 90 percent using this website. Simplifying basically makes the lines on the states’ polygons less precise but dramatically reduces the size of the file. This is important because people aren’t going to want to wait all day for your maps to load.

Merging data with map: I then opened that shapefile with an awesome, open-source mapping program called QGIS. I loaded up the four spreadsheets of data in QGIS as well using the green “Add vector layer” button (This is important! Don’t use the blue “Create a Layer from a Delimited Text File” button). The shapefile and all the spreadsheets will now show up on the right side of the screen in QGIS under “Layers.”

Each spreadsheet had a column of state names, which matched the state names in the shapefile. It’s important these state names match exactly. For the crime data from the FBI, for instance, I had to change the capitalization of the state names, turning “IOWA” into “Iowa” before loading the spreadsheet into QGIS, because that’s how the names were formatted in the shapefile.

Then you can open the shapefile’s properties in QGIS and merge the data from the spreadsheets with the data in the shapefile using the “Joins” tab. Finally, right-click on the shapefile’s layer, select “Save As” and export it as a GeoJSON file. We’ll use this with the wonderful mapping library Leaflet.


Leaflet: I used Leaflet to make the map. It’s awesome. Check it out. I won’t get into how I made the map interactive with Javascript because it’s copied very heavily from this tutorial put out by Leaflet. Also check it out. The only thing I did differently was basically make separate functions (mentioned in the tutorial) for each of my four maps. There is probably (definitely) a better way to do this but I kind of ran out of time and went with what I had. If you’re looking for the full code, go here and click “View Page Source.”
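For reference, the core of that approach looks roughly like this. It leans on Leaflet’s choropleth tutorial, and statesData, the violent_rate property and the element id “map” are placeholder names, not my actual ones:

var map = L.map('map').setView([39.8, -98.5], 4);

L.geoJson(statesData, {
  style: function (feature) {
    return {
      weight: 1,
      color: '#fff',
      fillOpacity: 0.8,
      // shade each state by its violent crime rate (placeholder property name)
      fillColor: feature.properties.violent_rate > 500 ? '#800026' : '#fed976'
    };
  },
  onEachFeature: function (feature, layer) {
    // highlight a state on hover, as in the Leaflet tutorial
    layer.on('mouseover', function () { layer.setStyle({ weight: 3, color: '#666' }); });
    layer.on('mouseout', function () { layer.setStyle({ weight: 1, color: '#fff' }); });
  }
}).addTo(map);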

Design: The buttons used are from Twitter’s Bootstrap. I used jQuery’s show/hide functions to show and hide all the elements on the page, including the DIVs for each legend, map and header.
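In sketch form, that toggling looks something like this (the button and DIV ids here are made up, not the ones in my project):

$('#btn-unemployment').click(function () {
  // hide every map's legend, map and header, then reveal the unemployment ones
  $('.map, .legend, .header').hide();
  $('#map-unemployment, #legend-unemployment, #header-unemployment').show();
});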

GeoJSON: The last thing I did was modify my GeoJSON file. You’ll notice the top 10 states for violent crime rates are highlighted in black on the maps so you can more easily compare their characteristics across the maps. Well, I went into the GeoJSON and put those 10 states’ attributes at the bottom of the file. That way they are loaded last on the map and thus appear on top of the other states. If you don’t do this, the black outlines for those states don’t show up very well and look like crap. Here’s the GeoJSON file for reference.

Hopefully that will help guide others. If you have any questions, feel free to drop me a line. Thanks!

 

Written by csessig

December 11, 2012 at 12:13 am

How We Did It: Waterloo crime map

with 3 comments

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Last week we launched a new feature on the Courier’s website: A crime map for the city of Waterloo that will be updated daily Monday through Friday.

The map uses data provided by the Waterloo police department. It’s presented in a way to allow readers to make their own stories out of the data.

(Note: The full code for this project is available here.)

Here’s a quick run-through of what we did to get the map up and running:

1. Turning a PDF into manageable data

The hardest part of this project was the first step: Turning a PDF into something usable. Every morning, the Waterloo police department updates their calls for service PDF with the latest service calls. It’s a rolling PDF that keeps track of about a week of calls.

The first step I took was turning the PDF into an HTML document using the command line tool pdftohtml. Mac users can install it by going to the command line and typing in “brew install pdftohtml.” Then run “pdftohtml -c (ENTER NAME OF PDF HERE)” to turn the PDF into an HTML document.

The PDF we are converting is basically a spreadsheet. Each cell of the spreadsheet is turned into a DIV with PDFtoHTML. Each page of the PDF is turned into its own HTML document. We will then scrape these HTML documents using the programming language Python, which I have blogged about before. The Python library that will allow us to scrape the information is Beautiful Soup.

The “-c” flag adds a bunch of inline CSS properties to these DIVs based on where they are on the page. These inline properties are important because they help us pull the information we want out of the spreadsheet.

All dates and times, for instance, are located in the second column. As a result, all the dates and times have the exact same inline left CSS property of “107” because they are all the same distance from the left side of the page.

The same goes for the dispositions. They are in the fifth column and are farther from the left side of the page so they have an inline left CSS property of “677.”

We use these properties to find the columns of information we want. The first thing we want is the dates. With our Python scraper, we’ll grab all the data in the second column, which is all the DIVs that have an inline left CSS property of “107.”

We then have a second argument that uses regular expressions to make sure the data is in the correct format, i.e. numbers and not letters. We do this to make sure we are not accidentally pulling text instead of dates.

The second argument is basically an insurance policy. Everything we pull with the CSS property of “107” should be a date, but we want to be 100 percent sure, so we use regular expressions to check that it’s integers and not a string.

The third column is the reported crimes. But in our converted HTML document, crimes are actually located in the DIV previous to the date + time DIV. So once we have grabbed a date + time DIV with our Python scraper, we will check the previous DIV to see if it matches one of the seven crimes we are going to map. For this project, we decided not to map minor reports like business checks and traffic stops. Instead we are mapping the seven most serious reports.

If it is one of our seven crimes, we will run one final check to make sure it’s not a cancelled call, an unfounded call, etc. We do this by checking the disposition DIVs (column five in the spreadsheet), which are located before the crime DIVs. Also remember that all these have an inline left CSS property of “677”.

So we check these DIVs with our dispositions to make sure they don’t contain words like “NOT NEEDED” or “NO REPORT” or “CALL CANCELLED.”

Once we know it’s a crime that fits into one of our seven categories and it wasn’t a cancelled call, we add the crime, the date, the time, the disposition and the location to a CSV spreadsheet.

The full Python scraper is available here.
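The real scraper is Python and Beautiful Soup, but the column-matching idea is simple enough to sketch. Here is roughly the same logic written with Node and the cheerio library instead; the file name, the crime list and the exact style strings are illustrative, not taken from the project:

var fs = require('fs');
var cheerio = require('cheerio');   // jQuery-style HTML parsing for Node

var CRIMES = ['THEFT', 'BURGLARY', 'ASSAULT'];   // a subset, for illustration
var $ = cheerio.load(fs.readFileSync('calls-page1.html', 'utf8'));
var rows = [];

$('div').each(function (i, el) {
  var style = $(el).attr('style') || '';
  if (style.indexOf('left:107') === -1) return;   // not the date/time column
  var dateTime = $(el).text().trim();
  if (!/^\d/.test(dateTime)) return;              // insurance: dates start with a digit

  var crime = $(el).prev().text().trim();         // the crime sits in the previous DIV
  if (CRIMES.indexOf(crime) === -1) return;

  // A fuller version would also locate the matching disposition DIV (left "677")
  // and skip rows marked NOT NEEDED, NO REPORT or CALL CANCELLED.
  rows.push({ datetime: dateTime, crime: crime });
});

console.log(rows);   // the real scraper writes these rows to a CSV instead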

2. Using Google to get latitude, longitude and JSON

The mapping service I used was Leaflet, as opposed to Google Maps. But we will need to geocode our addresses to get latitude and longitude information for each point to use with Leaflet. We also need to convert our spreadsheet into a JSON (JavaScript Object Notation) file.

Fortunately that is an easy and quick process thanks to two gadgets available to us using Google Docs.

The first thing we need to do is upload our CSV to Google Docs. Then we can use this gadget to get latitude and longitude points for each address. Then we can use this gadget to get the JSON file we will use with the map.

3. Powering the map with Leaflet, jQRangeSlider, DataTables and Bootstrap

As I mentioned, Leaflet powers the map. It uses the latitude and longitude points from the JSON file to map our crimes.

For this map, I created my own icons. I used a free image editor known as Seashore, which is a fantastic program for those who are too cheap to shell out the dough for Adobe’s Photoshop.

The date range slider below the map is a very awesome tool called jQRangeSlider. Basically every time the date range is moved, a Javascript function is called that will go through the JSON file and see if the crimes are between those two dates.

This Javascript function also checks to see if the crime has been selected by the user. Notice on the map the check boxes next to each crime logo under “Types of Crimes.”

If the crime is both between the dates on the slider and checked by the users, it is mapped.
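Condensed down, that filtering step looks something like this, assuming “map” is the Leaflet map object and “crimes” is the geocoded JSON array; the property names (date, type, lat, lng) are placeholders, not the project’s actual schema:

var markers = [];   // the Leaflet markers currently plotted

function redrawMap(minDate, maxDate) {
  markers.forEach(function (m) { map.removeLayer(m); });   // clear the old points
  markers = [];

  crimes.forEach(function (crime) {
    var d = new Date(crime.date);
    var inRange = d >= minDate && d <= maxDate;               // date slider check
    var checked = $('#check-' + crime.type).is(':checked');   // crime-type checkbox
    if (inRange && checked) {
      markers.push(L.marker([crime.lat, crime.lng]).addTo(map));
    }
  });
}

The slider’s change event and the checkboxes’ click handlers both just call redrawMap() with the current minimum and maximum dates.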

While this is going on, an HTML table of this information is being created below the map. We use another awesome tool called DataTables to make that table of crimes interactive. With it, readers can display up to 100 records on the page or search through the records.
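Hooking DataTables up takes a single call once the table’s rows are on the page (the table id and option value here are placeholders):

// DataTables adds searching, sorting and a page-length menu to a plain HTML table
$('#crimes-table').dataTable({
  "iDisplayLength": 100   // show up to 100 records per page
});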

Finally, we create a pretty basic bar chart using the Progress Bars made available by Bootstrap, an awesome front-end framework released by the folks at Twitter.

Creating these bars is easy: We just need to create DIVs and give them a certain class so Bootstrap knows how to style them. We create a bar for each crime that is automatically updated when we tweak the map.
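Here is the general idea, using Bootstrap 2-era markup (the class names changed in later Bootstrap versions, and the id and count variables are placeholders):

<div class="progress">
  <div id="bar-theft" class="bar" style="width: 0%;"></div>
</div>

<script type="text/javascript">
// After each redraw of the map, size the bar to thefts' share of the mapped crimes
$('#bar-theft').css('width', Math.round(100 * theftCount / totalCrimes) + '%');
</script>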

For more information on progress bars, check out the documentation from Bootstrap. I also want to thank the app team at the Chicago Tribune for providing the inspiration behind the bar chart with their 2012 primary election app.

The full Javascript file is available here.

4. Daily upkeep

This map is not updated automatically so every day, Monday through Friday, I will be adding new crimes to our map.

Fortunately, this only takes about 5-10 minutes of work. Basically I scrape the last few pages of the police’s crime log PDF, pull out the crimes that are new, load them into Google Docs, get the latitude and longitude information, output the JSON file and upload that new file to our FTP server.

Trust me, it doesn’t take nearly as long as it sounds to do.

5. What’s next?

Besides minor tweaks and possible design improvements, I have two main goals for this project in the future:

A. Create a crime map for Cedar Falls – Cedar Falls is Waterloo’s sister city, and like the Waterloo police department, the Cedar Falls police department keeps a daily log of calls for service. They also post PDFs, so I’m hoping the process of pulling out the data won’t be drastically different from what I did for the Waterloo map.

B. Create a mobile version for both crime maps – Maps don’t work tremendously well on mobile phones, so I’d like to develop some sort of alternative for mobile users. Fortunately, we have all the data. We just need to figure out how best to display it for smartphones.

Have any questions? Feel free to e-mail me at chris.essig@wcfcourier.com.

Courses, tutorials and more for those looking to code

with 3 comments

Note: This is cross-posted from Lee’s data journalism blog. Reporters at Lee newspapers can read my blog over there by clicking here.

Without a doubt, there is an abundance of resources online for programmers and non-programmers alike to learn to code.

This, of course, is great news for journalists like us who are looking to use programming to make visualizations, scrape websites or simply pick up a new skill.

Here’s a list of courses and tutorials I’ve found in the last couple months that have either helped me personally or look very promising:

1. Codecademy

Is 2012 the year of code? The startup service Codecademy sure thinks it is. They have made it their mission to teach everyone who is willing how to code within one year. The idea was so intriguing that the New York Times ran a front-page story (at least online) on it.

Basically, users create an account with the service and every week they are sent new exercises that will teach them how to code. The first exercises focused on Javascript. Now, users are moving into HTML and CSS. Each exercise takes a couple of hours to complete and builds off the previous week’s exercises. And best of all, it’s FREE.

If you are a huge nerd like me, you’ll gladly spend your free time completing the courses.

2. Coursera

Want to take courses from Stanford University, Princeton University, University of Michigan and University of Pennsylvania for free? Yeah, I didn’t really think it was possible either until I found Coursera, which offers a wide variety of courses in computer science and other topics.

Right now, I am enrolled in Computer Science 101, a six-week course that focuses on the basics. Each week, you are e-mailed about an hour of video lectures, as well as exercises based on those lectures. There is also a discussion forum so you can meet your peers. This isn’t nearly as time-consuming as Codecademy, which might be appealing to some.

3. Udacity

Like Coursera, Udacity offers a number of computer science classes on beginner, intermediate and advanced topics. The classes are also based on video lectures put together by some very, very smart people. I have not used this service, however, so I can’t speak to it too much. It looks promising though. And who wouldn’t want to learn how to program a robotic car?

4. Code School

This service offers screencasts on a host of topics like Javascript, jQuery, Ruby, HTML, CSS and more. The downside, however, is that this service costs money: $20 a month or $55 a screencast. If you are looking to try it out, check out their free beginner’s screencast on the Javascript library jQuery, which is the best beginner’s introduction to jQuery I’ve seen. They also have a free screencast for the Ruby programming language.

5. PeepCode

If you are looking for screencasts but are on a tighter budget, check out PeepCode and their list of programming screencasts. Each is about $12, is downloadable and typically includes source code to help you follow along at home. One of my favorites is “Meet the Command Line,” which will get you started with the Unix command line. Be warned, though: some of their screencasts are geared toward more advanced users, and a good understanding of programming is recommended before diving in (an exception is the command line tutorial mentioned above).

6. Net Tuts+

Many of the tutorials on this site are geared toward programmers wanting to learn very specific things or solve specific problems. This tutorial, for instance, runs through how to make borders in CSS. And this one deals with Vim, the command line text editor. So if you have a particular problem but don’t have a ton of time to sit through video tutorials, you might want to check out this site’s extensive catalog.

7. ScraperWiki

Web scraping is a great skill for journalists to have because it can help us pull a large amount of information from websites in a matter of seconds. If you are looking for a place to start, check out some of the screencasts offered by ScraperWiki, a service that specializes in — you guessed it — web scraping.

8. Coding blogs

The number of blogs out there devoted to coding and programming is both vast and impressive. Two of my favorites are Life and Code and Baby Steps in Data Journalism. Both are geared toward journalists. In fact, many of the sites I listed here were initially posted on one of these blogs.

– Got a cool website that has helped you out?

I’d love to hear about it! Feel free to leave a comment or e-mail me at chris.essig@wcfcourier.com

Written by csessig

May 3, 2012 at 8:22 am