At every public meeting, citizens are invited to stand up and speak for their allotted three minutes, yet few people ever hear their voices. This project makes it easier for citizens to keep tabs on public meetings. It uses data science and natural language processing to create a more visible dialogue between the public, citizens wishing to effect change, the news media and local government. The project aggregates the content of public meetings and makes it more accessible, easier to track and more shareable.
Empower citizen engagement by allowing users to distill the content of local public meetings into meaningful, shareable insights.
Idea: October 2015
Start Date: January 2017
End Date: on-going
Status: In development
Data Scientist, Journalist
Strategic Partnerships, Media Innovation
Public Comment makes it easier for users to track, search and share the outcome of public meetings.
Currently, most people do not attend public meetings. Open government laws compel states and local municipalities to make records of meetings public, often in the form of video or audio recordings. However, due to their length, few people take advantage of this public service. City managers, journalists and members of the general public find these records time-consuming to access and difficult to use.
HOW CAN WE MAKE THE CONTENT OF LEGISLATIVE MEETINGS MORE ACCESSIBLE, EASIER TO TRACK AND MORE SHAREABLE?
The average public meeting lasts one to two hours, and some run five hours or more
Members of the public may be interested in a specific issue of a multi-issue agenda
Decision points can occur at any point in a meeting and time code tracking by agenda item is not always readily available
Public Comment uses natural language processing and machine learning to provide a searchable and shareable synopsis of recorded meeting content. The mobile-friendly audio/video snapshot includes:
Speaker identification (as available)
Actions taken (motions, continuances, consent calendar readings, etc.)
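The action types above could be detected in transcript text in several ways; a minimal sketch, assuming simple keyword matching (a real system would likely need a trained classifier, and the `ACTION_PATTERNS` map below is hypothetical):

```python
import re

# Hypothetical keyword patterns for the action types named above.
# Keyword matching is a crude stand-in for a trained classifier, but it
# illustrates how transcript sentences could be tagged with actions taken.
ACTION_PATTERNS = {
    "motion": r"\bmotion\b|\bi move\b",
    "continuance": r"\bcontinu(e|ed|ance)\b",
    "consent_calendar": r"\bconsent calendar\b",
}

def tag_actions(sentence):
    """Return the action labels whose keywords appear in a transcript sentence."""
    text = sentence.lower()
    return [label for label, pattern in ACTION_PATTERNS.items()
            if re.search(pattern, text)]
```

For example, `tag_actions("I move to continue the item.")` would tag the sentence as both a motion and a continuance.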
HOW IT WORKS
Video feeds from public meetings are identified by users or by our team
Videos are transcribed using an automated transcription system (speech-to-text API)
Transcriptions are then analyzed using natural language processing and machine learning to identify key topics and phrases, people, agenda items and actions.
These results are then imported into a table and used to create a searchable summary or digest of the public discourse at each meeting
Email alerts, mobile notifications and automatically generated podcasts notify users of updates to the meetings they follow, allowing them to track public discourse over time
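The steps above can be sketched end to end; a minimal illustration in Python, assuming the transcription step has already produced time-coded text segments (the sample segments, stopword list and action keywords below are all hypothetical):

```python
import re
from collections import Counter

# Hypothetical transcript segments: (timestamp_seconds, text) pairs such as
# a speech-to-text API might return for a recorded meeting.
SEGMENTS = [
    (120, "We will now open public comment on the zoning amendment."),
    (310, "I move to approve the zoning amendment as written."),
    (330, "The motion carries. Next item is the parks budget."),
]

# Small illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "to", "on", "as", "is",
             "we", "will", "now", "i", "next", "item"}

def extract_key_terms(segments, top_n=3):
    """Count non-stopword terms across segments as a crude topic signal."""
    words = re.findall(r"[a-z]+",
                       " ".join(text for _, text in segments).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

def build_digest(segments):
    """Assemble a searchable digest: key terms plus time-coded action lines."""
    actions = [(ts, text) for ts, text in segments
               if re.search(r"\b(move|motion|carries|continuance)\b",
                            text.lower())]
    return {"key_terms": extract_key_terms(segments), "actions": actions}

digest = build_digest(SEGMENTS)
```

Here the resulting `digest` surfaces "zoning" and "amendment" as key terms and keeps the timestamps of the two action segments, so a reader could jump straight to the decision points in the recording.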
Phase 1: Proof of Concept
Product design sprint
Market demo site
Phase 2: Create Prototype
Identify tech stack
Python, Node.js/React Native, speech-to-text APIs