Open Source Planning

Mommy, what did civic tech look like?

Way back in 2007, residents of NYC could use a snazzy website to request bike parking. Pretty neat. And what did it look like? Sorry, all we have now are a few low-quality screenshots.

[screenshot: the 2007 bike parking request site]

One day, someone will write a history of this interesting time in civic tech. But that author won’t be able to see any of today’s sites as they actually are right now. We won’t know how these sites worked or what kind of information residents shared. We’ll hopefully still have video of demos, and contemporary accounts from press and blog coverage, but that’s mostly PR. What was it really like?

It’s OK, open source on GitHub is forever! Up to a point. Having the source means we can see the building blocks, but no content - nothing from users. The code may not even run, since pretty much everything depends on external JavaScript, map tiles, or data feeds. And old code rots. Good luck setting up the original 2011 WordPress-based Shareabouts today, less than five years later.

So in the future, we may not know what many of the smaller civic tech projects were really like. Even for successful, lasting sites, we won’t see earlier iterations. Apparently there was a great zoning map of Chicago, lost when Fusion Tables was shut down in 2017. Apparently there was a bike share map in NYC, but it only runs on a long-lost WordPress version from 10+ years ago. Apparently New Orleans had a map of blighted properties, but it’s so hard to get a raster tile map running now that we’re all on vectorOSM12.

This is not our biggest challenge. How websites look isn’t as important as what they do. Impact matters most. Hopefully we are better at telling stories and measuring participation than we appear to be at archiving. If projects don’t make change in the real world, who cares what they look like? And yet… as a self-critical, aware bunch of people, we need to understand where we came from.

What to do?

A couple of ideas:

  • The Wayback Machine is already working on this problem, but maps and other interactive elements don’t get saved, and images and styles are often broken. 
  • We could include screenshots in repos with something like pageres? Maybe that becomes part of the process of fully documenting a project. 
  • We could trawl projects listed in civic.json and use an automated screen capture script to document the state of sites regularly. That wouldn’t capture the user experience, but it’s a start (a rough sketch is below).
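
Here’s a minimal sketch of that last idea in Python, assuming a civic.json-style file containing a list of projects with a homepage URL, and the pageres CLI installed (e.g. via npm install --global pageres-cli). The file layout and field name are guesses, and any screenshot tool would work just as well:

```python
# archive_screenshots.py - rough sketch of the civic.json idea above.
# Assumes a civic.json-style file containing a list of projects, each with a
# "homepage" URL (field name is a guess), and the pageres CLI on the PATH.
import json
import subprocess
from datetime import date
from pathlib import Path

def capture_all(civic_json="civic.json", out_dir="screenshots"):
    projects = json.loads(Path(civic_json).read_text())
    dest = Path(out_dir) / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for project in projects:
        url = project.get("homepage")  # hypothetical field name
        if not url:
            continue
        # pageres saves a PNG of the page at the given resolution
        subprocess.run(["pageres", url, "1280x800"], cwd=dest, check=False)

if __name__ == "__main__":
    capture_all()
```

Run from a cron job, something like this would at least leave dated folders of homepage snapshots, even though it misses the interactive experience.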

(On my mind because this week I was working on a short history of OpenPlans, and reading Robert Caro’s immensely detailed LBJ bio, constructed from interviews and sources that are several decades old).

Don’t give open source undeserved credit

Della Rucker writes up the OpenPlans-Textizen collaboration for Philly bike share, where a Shareabouts map with SMS input collected around 10,500 public comments. I worked on this project for OpenPlans.

She highlights three reasons to be excited about Philly Bike Share’s approach to public involvement –

  1. a frictionless input process, making it easy to get involved
  2. catchy sidewalk posters, encouraging feedback and cutting through the visual clutter
  3. open source, making it easy to integrate the two tools used on the project.

The post is worth reading, but I don’t agree about the open source argument. Della says: “Because they are both open source, making these two work together — sending Textizen inputs directly into the OpenPlans maps — could be done with a minimum of fuss.”

Actually, the tools worked together because the development teams on both sides were competent, and motivated to create the best possible experience for Philly residents to engage about bike share.

Having good software helped, because the integration was easy (both Textizen and Shareabouts have APIs). That good architecture is a result of the capability of both teams, which also accounts for the project being quick and cheap: everything needed was already up and running (Textizen’s SMS gateway, OpenPlans’ databases, and so on).
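
To give a sense of how light that kind of API-to-API glue can be, here’s an illustrative Python sketch of pushing one SMS response into a Shareabouts-style places endpoint. This is not the actual project code; the endpoint path, attribute names, and auth header are assumptions:

```python
# push_comment.py - illustrative only, not the actual project integration.
# The endpoint path, attribute names, and auth header below are assumptions
# about a Shareabouts-style places API.
import requests

PLACES_URL = "https://shareabouts.example.com/api/v2/owner/datasets/bikeshare/places"
API_KEY = "replace-with-a-real-key"

def push_sms_response(text, lat, lon):
    """Turn one SMS response (e.g. a suggested station location) into a map point."""
    place = {
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "comment": text,            # assumed attribute name
            "submitter_name": "SMS",    # assumed attribute name
        },
    }
    resp = requests.post(
        PLACES_URL,
        json=place,
        headers={"X-Shareabouts-Key": API_KEY},  # header name is a guess
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    push_sms_response("Put a station by the library", 39.9526, -75.1652)
```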

The underlying software being open source isn’t relevant here. And it isn’t all open anyway. Shareabouts is open source (here’s the customized code for the Philly bike share map). Textizen started out as open source, but much of its value today comes from proprietary “secret sauce” features that give users the best possible tools for making sense of community input.

I highlight this not to be a pedant, but because there’s a widespread idea that “open source” automatically confers various positive attributes on a software project. In the case of this bike share map, being open or closed was irrelevant to the project’s success.

The real success here is having good firms with mature software tools available at reasonable prices.

For short-term projects, tools offered affordably as a service (the software term for “on tap”) are more beneficial to planners than open source. Being able to get a new collaborative map or other tool running quickly and cheaply is potentially very useful - though as we discovered at OpenPlans, that potential hasn’t yet been exploited by most planners.

Open source still matters in other ways. It’s great that other cities have benefited from the work on open source Shareabouts without needing to involve OpenPlans (unlike, say, Textizen or MindMixer, where no such flexibility exists). Many planning firms have also used Shareabouts, again without needing to engage OpenPlans directly. As for the expensive, complex projects that underpin many city systems, spending tax dollars on bad, proprietary software is not good. And planners and planning need more open source thinking and methods, whatever tools they use.

(If you want to read more about the station selection process for Philly Bike Share, check out this report).

Why can’t city map launches be more like Apple keynotes?

Even the most casual follower of Apple product launches knows the format – after unveiling the new iPod touch or big screen phone, presentation duties are handed over to a developer. He (it usually is) comes up and demos a new game or app running on this new device.

(In case you’ve never seen an Apple keynote. Pic from ubergizmo.com)

These presentations are a great demonstration of the strength of the ecosystem that Apple is fostering. You get great hardware, and great apps. Consumers get good tools, and there’s also an unsubtle hint to developers of the great things they can expect when building software to run on Macs and Apple devices (e.g. keynote exposure, $$$, etc).

Contrast the Apple keynote experience with the typical launch of a city-built web map. No sharing of the spotlight, no acknowledgement of the strength that comes from having multiple users of open data inside and outside city hall. No affirmation of the principles of “dogfooding” data by city users – that the data get better for everyone, city workers can be empowered to do more, economic development and social change can come from an ecosystem powered by city data.

While the high gloss of Apple presentations is (thankfully) not a part of the municipal tech scene, we could definitely borrow the idea of co-presenting and demonstrating the ecosystem. Much better than the current binary options of entirely city-led tech projects, or entirely external app contests.

Here’s the playbook:

  1. City department prepares to release some data, wants to make a map
  2. City contacts a couple of external developers who put together a small demonstration project - maybe suggesting a couple of areas for investigation that the city’s own project won’t address
  3. City builds their map or data tool internally
  4. At launch of the map, the press release/event shares stage time with the internal team and the external developer. 
  5. City-built app appeals to one audience, city staff get props, Mayor is a visionary, etc
  6. Developers and other data users see a strong ecosystem

(Prompted by the release of NYC DOT’s Vision Zero View. More to say about dogfood and city map making soon).

Wanted: Streetsblog for civic innovation

(thx to @nickgrossman for the pithy headline)

We’ve come a long way with this civic tech thing, building great tools for great cities. But word still travels way, way too slowly. Most of the impact we imagine is still only… imagined.

And we’re imagining something big: tech in cities has the potential to profoundly change how citizens experience government and life. Big, powerful changes that create new economic opportunities and new lives for many people. Done right, dramatic changes that benefit, and are driven by, people who have been marginalized and powerless.

How big? Think change at the scale of the interstate system + mass auto ownership + suburbanization (but much more beneficial). That’s about the right order of magnitude, maybe underestimating a little.

So why, given where we want to be, are our voices so quiet, our visions so weak, and our messages so puny? Changing cities with tech can’t be done through jargon about standards, tame lists of tools, and nerdy guidance on open data. We’re failing everyone this way, failing to put tech to work on urgent challenges now.


But hold on, what’s the problem? Isn’t there great stuff coming from _____________ (insert your favorite CfA fellows, incubatees, open data leaders, Brigades, etc)?

Yes, there are amazing new tools, necessary data work, smart people, and new businesses. Good stuff all over. Necessary but not sufficient. The civic tech diaspora is still too inwardly focused (exhibit A: this blog post). We’re doing a bad job of communicating beyond the lightning talk/blog frenzy echo-sphere. Consider:

  • Despite big efforts around cataloging (Civic Commons*) and local guidance (e.g. Sunlight’s nascent local work), there still aren’t good resources available for municipalities to get started. Where resources exist, our crowd tends towards completeness over clarity, enormity over editorial. And that makes it hard for new people to get up to speed.
  • Standards are wonderful but we can’t lead with them. Lead with need (or need + standards in parallel, but that doesn’t rhyme).
  • Focused work like the incredible Smarter Chicago by definition has to keep that focus in its own backyard. Same with Reinvent Albany, the NYC Transparency Working Group, and others. They’re too busy to be the Appleseeds of civic tech.
  • New startups are going to sustain this revolution eventually, but they need some help now. Capacity building and awareness raising will prepare cities to be ready (in all aspects – mindset, $$, process, legal).
  • And finally, although Code for America fellows and the peer network are great, they’re tiny compared to the challenges ahead.

No silver bullet, but here’s a missing component that is within reach: high-quality, informed guidance for cities. Stories about previous successes, briefing notes about areas of opportunity, best practices to follow, local peers to learn from. Not cherry picking highlights that may be hard to replicate, but solid, reliable expertise. Not marketing disguised as guidance. Talking in a language cities also speak. Sounds a little dull, but we have such a story to tell, it’ll be anything but.

This is a fixable problem. Funders could support this story telling very easily in a new or existing organization. It would be cheap (no software developers). This could be a regional or national effort. 

It should be easy to measure impact. And it will have quite the impact: sharing stories and building capacity empowers and activates existing civic and good gov groups, planners, and others, so they can pick up the song and add their voices. Perhaps there’s a bus that goes from town to town, a rolling roadshow of the future… And crucially, this is short term, going out of business within a few years, having catalyzed new energy and thinking in city halls and gov offices all over. A bright light that starts many fires.

It’s easy to throw stones. Demos not memos **, always. But I am disappointed that this emerging sector still has so far to go before it really emerges and makes an impact. Otherwise, what are we doing with our short lives?


* lots more could be said about Civic Commons. Benefitting from hindsight, I see the primary failure as not serving a clear user need – it would have been better to build up guidance based on the need cities have. Nobody really needs a catalog.

** “demos not memos” is the perfect tech-minded civic activist tattoo. With “be good or be good at it” on the other bicep.

(Here’s a use case for this new effort: I chatted last week with a town that’s thinking about its options for 311 - service request tools, etc. Open311 sounded interesting but was completely confusing. This town wants to get it right. So we chatted about the various aspects of standards, tools, open data vs internal management, etc. I suggested some vendors (SeeClickFix, Public Stuff). Afterwards, I went looking for more to send. And it turns out, we lack well-written, accessible guidance in digestible forms. And for 311, you can substitute almost any other area of city government work.)

Easy steps to better open data in NYC

NYC has a great open data law, and decent compliance - lots of new data becoming available. Opening up data is good, but seeing it used to solve real problems is the most desirable end goal.

As an occasional user of the nyc.gov/data portal, I’ve encountered some areas for improvement. Fixing these will make it easier to get open data, which in turn will enable all the great outcomes we want to see. Here are a few observations, with suggested remedies.

On the portal –

Search is horrible in the data portal. It really sucks. The single biggest suck is that search returns user-filtered subsets of data. So for example, if I filter 311 calls about rodents and save that search, my saved view shows up in everyone else’s results. Surfacing derivative data in primary search is a horrible experience for users, because you have to wade through so many near-duplicate results.
Suggestion: Fix search.

There’s a secret magic dataset of all city-owned datasets in the portal. For me, it’s transformed the open data site into something helpful. But you have to know about it (it’s at https://data.cityofnewyork.us/dataset/NYC-OpenData-Catalog/tdsx-cvye). The magic dataset should be much easier to discover. Suggestion: Add a link to the magic list of data.

Update: check out the link on the front page of the data portal, pointing to a dashboard of the city’s open data. Big improvement.

Update II, 10/5: Ok, the dashboard is actually not very helpful. But this buried link is.

Metadata is patchy. Datasets could have letter grades for the completeness of metadata, to help the city prioritize datasets that need work (though this doesn’t address the quality of the metadata).

Suggestion: Add more metadata.
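
As a rough sketch of how grading could work, here’s a Python pass over the catalog-of-datasets resource mentioned above, via Socrata’s standard /resource/<id>.json endpoint. The metadata fields checked are assumptions; the real column names would need to be confirmed first:

```python
# grade_metadata.py - sketch only; the metadata fields checked are assumptions.
import requests

# The "magic" catalog-of-datasets resource, via Socrata's /resource/<id>.json API.
CATALOG_URL = "https://data.cityofnewyork.us/resource/tdsx-cvye.json?$limit=5000"

# Fields we'd like every dataset to fill in (hypothetical column names).
EXPECTED_FIELDS = ["name", "description", "agency", "update_frequency", "category"]

def grade(row):
    """Letter-grade one catalog entry by the share of expected fields present."""
    filled = sum(1 for field in EXPECTED_FIELDS if row.get(field))
    share = filled / len(EXPECTED_FIELDS)
    return "A" if share == 1 else "B" if share >= 0.8 else "C" if share >= 0.6 else "D"

def main():
    for row in requests.get(CATALOG_URL).json():
        print(grade(row), row.get("name", "(unnamed dataset)"))

if __name__ == "__main__":
    main()
```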

Metadata is hard to browse.
Suggestion: User test the metadata layout and improve it.

Data with addresses is not automatically geocoded! For the city, this should be a data processing step that’s almost free, and integrated somehow into the pipeline of uploading a dataset (whatever that currently is). Where data has been geocoded, include a column indicating the success or otherwise of the geocoding.
Suggestion: Geocode all address data!
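
A sketch of what that pipeline step might look like in Python, with the success/failure column included. The geocode() function is a hypothetical stand-in for whatever geocoder the city already runs (Geoclient, a batch service, etc.), and the column names are placeholders:

```python
# geocode_step.py - pipeline sketch; geocode() and the column names are placeholders.
import csv

def geocode(address):
    """Stand-in for a real geocoder (Geoclient, a batch service, etc.).
    Returns (lat, lon) on success, or None."""
    return None  # replace with a real lookup

def add_coordinates(in_path, out_path, address_field="address"):
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        fields = list(reader.fieldnames) + ["latitude", "longitude", "geocode_status"]
        writer = csv.DictWriter(dst, fieldnames=fields)
        writer.writeheader()
        for row in reader:
            result = geocode(row.get(address_field, ""))
            if result:
                row["latitude"], row["longitude"] = result
                row["geocode_status"] = "ok"
            else:
                row["latitude"] = row["longitude"] = ""
                row["geocode_status"] = "failed"  # the success/failure column
            writer.writerow(row)
```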

Those are the pain points around the data itself. The other big area of improvement is around the human-facing side: customer service for data consumers… 

Answer the questions. Currently, the user request area on the portal is a ghost town. The portal provides some tools for asking questions and giving feedback, but many questions go unanswered. Maybe even most.

Suggestion: staff up to answer questions.

Get community assistance in answering questions. Consider a third-party tool for handling data questions. The Stack Overflow model, or Discourse, might be better for a collaborative exploration of issues with data. And the interface will be much better than the question tools built into the Socrata platform. It would be really cool if NYC used the open data Stack Overflow to field and respond to questions.

Suggestion: Bring in the community for some structured data Q&A.

Deal with public requests. There are some good and some wacky requests for new data. Most have been ignored. This is bad. These questions can all be answered, and should be. Particularly the non-controversial ones – like asking for rodent data that’s already covered by some other dataset. Again, the Stack Overflow model could help.

Suggestion: Triage requests: quick nos to data that’s not feasible to open up, or shared elsewhere, or the jurisdiction of the MTA. Put other stuff onto a list for future consideration. 

Foster community-led working groups. I’m just a frustrated data user who wants to see the data portal be better. I want to see more people working with data. There are many others who are smarter than me, who will happily give time to help improve open data in particular topic areas. Showing up for discussions with community groups isn’t a commitment to do anything apart from listen, but it builds up a ton of trust. 

Suggestion: Working groups!

Price transparency is good for civic tech

A few weeks ago at OpenPlans we put our prices for Shareabouts onto our website. Before then, if you wanted to pay OpenPlans to set up a map, we had to talk about it - our prices weren’t secret, and I’ve happily described them on conference panels, but getting the details wasn’t as easy as going to our website and looking.

Price transparency like this is a really good thing for people buying technology for government. We’re chipping away at the appallingly expensive status quo.

I know that the fine people at Civic Insight have done something similar, and they even have a fancy pricing calculator.

Sharing prices reduces friction for people seeking high-quality tools.

Every phone call or email follow-up to find out the cost of a tool is a small barrier to doing a better job of community involvement - small barriers that add up enough to stop a busy person. And even for a simple query, that’s research time that could be better spent on other tasks.

Transparent pricing helps other people advocate for good tools.

We all benefit from a well-informed community inside and outside city government, with realistic expectations of the costs of tools. These tools are also much cheaper than many people expect, but they aren’t free. And what you pay for is extremely good value. Having this info available helps everyone understand the options.

Why keep prices secret? Concern that these might not be the “right” prices perhaps? Sure: we might not be charging enough, or we might be charging more than some cities want to pay for particular features. As we keep working on adding new features, we will re-evaluate. Perhaps concern about being undercut by others? Or wanting to keep pricing flexible/opaque in case a mythical deep-pocketed client shows up? Neither of these seems like a good argument to me (and they weren’t arguments used by anyone at OpenPlans, I should add - we were slow to do this mostly because we’re small and busy).

The prices we’re sharing don’t cover everything, for example special feature development we are often asked to do. Soon, we will add prices for OpenPlans, our planning communication tool. We have more work ahead to give greater openness to the costs of hiring us, but we’re trying.

Spend your time on data tools

I mentioned that teams working on Big Apps should look at data trends, not just make maps. A related observation: you have limited development time, so don’t waste it building an engagement tool. Focus on a data tool.

What does this mean? By “data tool”, I mean something that helps you look up or make sense of data about the world around us. For example: recent building permits, maximum buildable floor area around a location based on current zoning, something with financial data, or access to health care services. A tool that can imperfectly answer a defined question over and over, maybe for different places and times.
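
The zoning example shows how small such a tool can be. A toy sketch in Python - the FAR (floor area ratio) values here are made up for illustration, and a real version would look up the lot’s zoning district and FAR from city data:

```python
# max_floor_area.py - toy example; FAR values here are made up for illustration.
# Max buildable floor area = lot area x floor area ratio (FAR) for the district.
FAR_BY_DISTRICT = {
    "R6": 2.4,     # hypothetical values, not real zoning rules
    "R7A": 4.0,
    "C4-4": 3.4,
}

def max_buildable_floor_area(lot_area_sqft, zoning_district):
    """Answer one defined question, over and over, for different lots."""
    far = FAR_BY_DISTRICT.get(zoning_district)
    if far is None:
        raise ValueError(f"No FAR on file for district {zoning_district}")
    return lot_area_sqft * far

# e.g. a 2,500 sq ft lot with FAR 2.4 -> 6,000 sq ft of buildable floor area
print(max_buildable_floor_area(2500, "R6"))
```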

There are heaps of difficult problems out there, and community organizations need help answering them. Often, these organizations understand both a problem and the answer they need, and the right data tool, if sensitively designed, can slot right in and provide answers right away that can lead to immediate positive change. Sure, you have to do some work to identify these problems and the organizations, but the payoff is huge (when measured in social goodness, at least). 

The alternative is the seductive world of engagement tools. Look at all those people on Facebook! Look at these tweets! Surely we can harness just a little bit of this energy to get people engaged online in fixing this problem. Every neighborhood might be different, but they all need this collaborative tool for…. Alas, the answer is almost never building the missing tool.

Instead, the answer to organizing challenges – getting neighbors to care about safer streets, or parents about schools, or anyone about anything – is to meet them face to face. Technology can obviously do a lot to help all along the way, but it won’t replace people and capacity on the ground. And if you’re considering building tools, you likely aren’t also building capacity face to face. That’s not a judgement, just a realistic assessment of how you can spend your valuable and limited time.

So, if you’re embarking on a development project for social good, go build some data tools.

Trends, not maps

This year’s Big Apps competition is focused on some real issues, including traffic safety. We have tons of open data that can be used to explore these issues (like recently-released crash data), but most responses I’ve seen are maps.

Maps are great, but tools to examine trends are better. For example:

  • Is this district seeing more crashes than last year?
  • Is the past week a “typical” week, or is there something to look into?
  • How do increases in crash numbers in this district compare to our neighbors over there in another district?
  • How does change in this neighborhood rank in comparison to others in the city?

These are questions that people in community organizations and elected officials need answers to.  City-wide mapping tools are just less useful, because they don’t drive towards policy responses.

And it’s not just street safety. Imagine dashboard tools showing trends from open data like 311, or building permits. 

Trend tools are harder to build, but they are so powerful. We need more of them (and a framework to do time and district comparisons on a dataset would be very helpful to get us there). 
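
A minimal pandas sketch of that kind of comparison framework, run against a hypothetical crash extract; the file name and column names (“crash_date”, “district”) are placeholders for whatever the real dataset uses:

```python
# crash_trends.py - sketch; the file and column names are placeholders.
import pandas as pd

def district_year_over_year(csv_path="crashes.csv"):
    df = pd.read_csv(csv_path, parse_dates=["crash_date"])
    df["year"] = df["crash_date"].dt.year

    # Crashes per district per year.
    counts = df.groupby(["district", "year"]).size().unstack("year", fill_value=0)

    # Year-over-year change between the two most recent years, then rank districts.
    years = sorted(counts.columns)
    last, prev = years[-1], years[-2]
    counts["change_pct"] = (counts[last] - counts[prev]) / counts[prev] * 100
    return counts.sort_values("change_pct", ascending=False)

if __name__ == "__main__":
    print(district_year_over_year())
```

Each of the questions above then becomes a quick lookup against that table: compare a district to last year, to its neighbors, or rank the change city-wide.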

UPDATE 5/29: Here’s a great trend dashboard from Make Queens Safer.

 

The only reason I know this, and that other neighborhood leaders know this, is because of government records. Northside neighborhood leaders try to keep up; they’re some of the most hawk-eyed citizens in the city. But often times local government can be the worst enemy in untangling the messes left by these companies.

Neighborhood leaders and housing researchers are force multipliers to counties losing out on property taxes and cities failing to enforce rental license laws and ensure livability.

Government should be doing everything they can to ensure that housing researchers have the convenient access they need to help fight fraud and abuse.

– “This is what happens when housing data isn’t open and accessible”, by Tony Webster. https://tonywebster.com/2014/01/housing-data-open-minneapolis/

Link: Public process: Don’t botch your online engagement »


“In short, their impressive wizbangery can be deceptive, fooling the uninitiated into thinking it’s the tool that really matters, rather than the goal-focused story the tool allows you to tell.” — Scott Doyon on websites for planning


The tools community boards and council members need

Borough President Gale Brewer hosted a roundtable on data and tool needs on Monday. Manhattan Community Board chairs and Council Members attended. Here’s my condensed list of the needs I heard –

Street safety

  • Need to identify locations for neckdowns and turn signals — where should they go, and where are the dangerous intersections?
  • What is the difference between crash data from NYPD vs DOT, who collects what data?
  • Need to track construction/traffic issues.
  • Want to have a replacement/tools to deal with LMCC closing.

Planning

  • What is the impact of new development (for planning schools, transit, sewage treatment)? Need forecasting tools.
  • Need tools to overlay district info with other data layers.
  • Need affordable housing data – type, expiration, capacity, requirements, what is being built, in the pipeline
  • Need FEMA flood zone maps.
  • Need construction projects mapped, all on a single map
  • Need to map out energy efficiency, green buildings.
  • Need to know commercial/vacancy rate in the community.
  • Need population projections (for schools).
  • Want to be proactive with air rights for Hudson River Park.
  • Need help working with/verifying DOE data.
  • Need health data (no hospital in the district, uses a lot of small community based health centers, hard to get those datasets). 
  • Want to set up a system to stay on top of a retail survey.
  • Want to model shadow impact on parks from tall buildings. 
  • Want to get demographic data by school enrollment zone.


Quality of life

  • Need tools to work with 311 data: construction, noise.
  • Need State Liquor Authority data overlaid with other info tied to a single address. 
  • Need to see more info about liquor license requests – what else do applicants own in the city? What is going on with their applications before other boards?
  • Want to track places with noise complaints/nuisance reports.
  • Want to map buildings with C violations – not getting turned around quickly enough.
  • Need access to quality of life data/complaints

Process

  • Need a digital complaints form for a CB office – want to see how many complaints come in, how many are resolved, etc.
  • Need to track CB resolutions – boards send them out but aren’t sure whether they are acted on; for the SLA and other agencies, there’s no clear way to follow up or track outcomes.
  • Need better public notifications – meetings, issue notifications, followups.

Link: Walk First tool - what should SF be investing in? »


The City will be investing $17 million over the next five years to improve safety conditions for people walking. Given the City’s limited resources and the need to use this money effectively, you will be asked to prioritize each of 15 pedestrian safety tools (see Tools page for more information) by indicating whether or not you feel that the tool is a low, medium or high funding priority – essentially, how would you spend $17 million on pedestrian safety?

After each selection, you will see the graph at the right change in relation to your choices. This graph shows how your choices affect the total cost, the time to implement and the effectiveness of the solutions.


Don’t make laws to make maps

Hey, legislators! Don’t write laws that require maps (especially those that detail how the map information will be aggregated).

Instead, write laws to open up data. The maps will come. Much easier. Shorter, future-proofed laws. 

If you feel strongly about this, go testify. @transalt will be there, and other open data smarties.

This time, the law under discussion is about crash data in NYC, but the same unfortunate approach already made it into law for crime data. Interactive maps are an excellent tool for making complex data public, but requiring a city agency to produce the map is not the right approach. Why not?

1. We need tools that answer questions and solve problems, and doing that well requires you to start with those needs, rather than building a generic map. 

2. The track record of government-built maps is not great, maybe because of #1, or the tools they have, or because of internal development practices that don’t involve users, or something else. For example, the SLA liquor license map.

3. The track record of researchers, technologists, and journalists building data browsing tools is excellent. For example: excellent crime analysis, insightful 311 analysis, everything WNYC does, Vizzuality’s output, etc.

4. There are complex tech problems that talented government technologists should work on. Making an interactive map isn’t one of them.

5. Legislation that is extremely specific seems brittle and prone to letter-of-the-law compliance later, especially if a city department decides to be uncooperative in the future. Fully disaggregated data, by contrast, is flexible. We already have guidelines for opening up data “right”; there’s no need to re-design them for each different type of data. Getting particular about mapping requirements is the worst sort of over-specifying. For crashes, for example, mandated aggregation by street segment might prevent analysis of intersection safety.