My Application to the Holman Prize, or How You Can Help Me Help You with STEM Data Visualization Accessibility
There's something I've needed to do for a while.
Libaudioverse is a massive digital signal processing project, and it is the nature of digital signal processing that much of the debugging would go much more easily if I could examine the data. But I can't, because I'm blind and the tools that would let blind people do so don't exist.
All we have at the moment are graphing calculators. Input an equation, output a graph. This is nearly useless for anything but high school mathematics. The real world is not made up of homework polynomials, and nobody spends their days finding where a polynomial crosses the y axis over and over.
Fixing that is time consuming. My plan was to take Libaudioverse as far as it could go without such tools, then write the bare minimum for my personal use. This may have been useful to others, but it wouldn't have done much outside the specific domain of DSP. Also, documentation? What documentation?
But then I found out about the Holman Prize via my friend Chris Hofstader, who writes a prominent blog on accessibility. It offers up to $25000 of funding.
Suddenly, producing useful, documented tools with convenient UIs for a whole variety of domains is on the table. Digital signal processing? Definitely. Machine learning. Economics. Weather data. The list goes on and on. We can't access any of these conveniently. It's not impossible, but no one has written the tools to do it. I don't know why this is--I suspect a general lack of funding for anything past the K-12 age group--but I have the chance. I want to take it.
You can help me by going to my 90-second video pitch and liking it, then sharing both the video and this blog post as widely as possible. The LightHouse for the Blind and Visually Impaired in San Francisco is explicitly monitoring social media, and one way to win is to have the most YouTube likes. If I can secure the funding, this project will have a nearly immediate and amazingly large impact on every blind person in STEM. If you aren't blind or in STEM, that's okay: you can still do both of these things, and it may very well help a whole bunch of people who are.
The rest of this post goes into technical details, what I think I can do, and generally how I plan to do it if I win. Before putting on my scientist/programmer hat, let me just close by saying that this will be free and open source software. Using it will cost nothing, and anyone who knows programming and needs more functionality will be free to help add to it.
Pieces We Have Now
The current state of the art in graphing for the blind is either the Orion TI-84+ or the Audio Graphing Calculator from ViewPlus. The former is a modified TI-84 and the latter is a piece of software, but they both do essentially the same thing: provide the functionality of a scientific calculator, then add on some basic graphing components. I don't personally own either, though I have seen both in action. The sad truth is that both are quite expensive. If I were still functioning at a lower level of mathematics--if everything I needed to graph wasn't arrays of complex numbers from Scipy filter design functions--I'd buy them. But they don't do much for people at my level, which is the problem I'm trying to solve. I consider them a relevant prerequisite because they have given us a workable technique for simple line graphs.
The idea of the graphing component is very simple: stereo pan represents the x axis and pitch the y axis. If you're curious, I used Libaudioverse to prepare a couple demos: this audio graph of x^2 and this one of sin(x).
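To make that mapping concrete, here is a minimal sketch in plain NumPy and the standard-library wave module rather than Libaudioverse. The frequency range, sweep length, and file names are arbitrary choices for illustration, not anything the real tools are committed to:

```python
# Sketch of the pan-for-x, pitch-for-y mapping: the data sweeps from the
# left ear to the right while its value controls the pitch of a sine tone.
import numpy as np
import wave

def audio_graph(y_values, filename="graph.wav", sr=44100, duration=3.0,
                f_low=220.0, f_high=880.0):
    n = int(sr * duration)
    # Resample the data so there is one value per output sample.
    y = np.interp(np.linspace(0, len(y_values) - 1, n),
                  np.arange(len(y_values)), np.asarray(y_values, float))
    # Normalize into 0..1, then map to a frequency for every sample.
    span = np.ptp(y) or 1.0
    freq = f_low + (y - y.min()) / span * (f_high - f_low)
    # Integrate frequency to get phase so the pitch changes smoothly.
    phase = 2 * np.pi * np.cumsum(freq) / sr
    tone = 0.3 * np.sin(phase)
    # Simple linear pan: x position sweeps from full left to full right.
    pan = np.linspace(0.0, 1.0, n)
    stereo = np.stack([tone * (1 - pan), tone * pan], axis=1)
    pcm = (stereo * 32767).astype(np.int16)
    with wave.open(filename, "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)
        f.setframerate(sr)
        f.writeframes(pcm.tobytes())

# Example: the x^2 demo mentioned above.
xs = np.linspace(-4, 4, 400)
audio_graph(xs ** 2, "x_squared.wav")
```

With x^2 you hear a tone that starts high in the left ear, falls to its lowest pitch in the middle, and climbs back up as it reaches the right ear.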
On a side note, sine waves are annoying. Try doing some graphing that sounds like this with a headache. But it's what everyone else is using. My tools are totally going to find something better; there are plenty of options, synth pads for example. Anyway.
The second piece we have is Libaudioverse, my library for 3D audio and synthesis. You can find it on GitHub. I'd be lying if I said it was perfect but, compared to the competition, I'm leagues ahead. Most other libraries are capable of loading a file and sending it to the speakers, but what we need is something that can represent arbitrary synthesis algorithms and run them efficiently and with low latency. I win on all three of these counts.
Also, I would be lying by omission if I didn't point out that of course I want to use my own library. There's good reason for it here, though. In addition to the advantages Libaudioverse brings to the table, I have full control over it and know how to add new components. Extending it is probably going to end up being important.
Finally, the third piece we have: science stacks we can integrate with. I certainly won't be writing my own, not for $25000. The Numpy/Scipy/Sympy stack is all open source and works fully save for graphing components. The Julia science stack can call Python code, and failing that it can call C code. GNU Octave can call external processes for plotting. NVDA can get data out of Excel charts. Blind people can use all of these tools, and each is powerful in the domains where it is used.
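To make the "works fully save for graphing" point concrete, here is the sort of output I keep running into. The filter order and cutoff are arbitrary example values:

```python
# A Scipy filter design produces an array of complex numbers -- exactly the
# kind of data a sighted user would just hand to matplotlib.
import numpy as np
from scipy import signal

# Design a 4th-order Butterworth low-pass filter.
b, a = signal.butter(4, 0.2)

# freqz returns the frequency response as complex numbers.
w, h = signal.freqz(b, a)
magnitude_db = 20 * np.log10(np.abs(h))
phase = np.unwrap(np.angle(h))

# Today, the accessible option is reading the numbers off one by one.
for freq, mag in list(zip(w, magnitude_db))[::64]:
    print(f"{freq:.3f} rad/sample: {mag:.1f} dB")
```

A sighted user would plot `w` against `h` and see the filter's shape in a second; my options are printing numbers like this or writing one-off sonification code by hand every time.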
So What's the Plan?
First, I'll get a mailing list up and facilitate discussion. I know that I need to solve problems for digital signal processing, machine learning, and other general programmer sorts of things. Programming isn't the only science out there, and I'm not the only person who wishes tools existed. What I want to do is get as many use cases as possible, then go through them and find the commonalities.
While that's happening, I'll solve the following core problems, which will show up in every domain:
- No good library exists for soundscapes and/or sonification. We need a number of small sonifiers that can be linked into bigger constructs. Examples include lines, one-off events, clicks, etc., all designed to be fed mathematical data.
- No tool exists to graph arrays of ordered pairs of real numbers. This will look a lot like the current graphing calculators, except that it's designed to be used as a library and fed data from scientific simulation scripts.
- No tool exists that can handle complex numbers. As far as I am aware, no one has tried to solve this one. I have some ideas, but a sonification is worth a thousand words in this case.
- No tool exists to work with scalar functions and/or data of two variables (that is, things such as f(x, y) = xy). This will probably involve a UI that allows one to "feel" the surface using a touchscreen; there may be a way to do it without one, but I have not yet had a workable idea in this regard.
- No tool exists to monitor realtime changes in parameters, e.g. measuring how much the different layers in a neural network change.
- While reading off lists of numbers technically works, no tools exist to let blind people work out trends or relationships in discrete data such as bar and pie graphs, scatter plots, etc. (see the sketch after this list).
- All of this needs to be integrated to some degree with the current science stacks and given a workable UI, while still maintaining the ability to output to other places such as wave files on disk wherever possible.
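As an example of the discrete-data item above, here is a rough sketch of one possible approach, again in plain NumPy. The pitch range, note length, and example figures are arbitrary assumptions, and the real tools may well do something smarter:

```python
# Each value in a bar-graph-like series becomes a short beep whose pitch
# encodes the value, so trends are audible without reading numbers one by one.
import numpy as np
import wave

def sonify_bars(values, filename="bars.wav", sr=44100,
                note=0.25, gap=0.1, f_low=220.0, f_high=880.0):
    vals = np.asarray(values, dtype=float)
    span = np.ptp(vals) or 1.0
    freqs = f_low + (vals - vals.min()) / span * (f_high - f_low)
    t = np.arange(int(sr * note)) / sr
    silence = np.zeros(int(sr * gap))
    # A 10 ms fade at each end so the beeps start and stop without clicks.
    env = np.minimum(1.0, np.minimum(t, note - t) / 0.01)
    chunks = []
    for f in freqs:
        chunks.append(0.3 * env * np.sin(2 * np.pi * f * t))
        chunks.append(silence)
    pcm = (np.concatenate(chunks) * 32767).astype(np.int16)
    with wave.open(filename, "wb") as out:
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(sr)
        out.writeframes(pcm.tobytes())

# Example: four quarterly figures become four beeps of rising and falling pitch.
sonify_bars([12, 18, 9, 15], "quarters.wav")
```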
As I solve these, I'll write tutorials and make example audio. It isn't enough to have tools; people need to know how to use them effectively. It would be nice if the general knowledge sighted people have built up about different sorts of visualizations transferred directly, but we're completely in uncharted waters here. I suspect the audio conventions will end up very different, in the same way that screen readers are significantly different from a mouse.
Finally, throughout this process and with whatever time and money is left over, I'll incorporate whatever came up on the mailing list and/or in other discussions, starting with whichever ideas seem most likely to benefit the largest number of people. I am certain that what I am discussing can be done within the $25000 budget, but it is the nature of software development and research that it's hard to say for certain how much will be left at the end.
Then, a whole bunch of blind people who otherwise couldn't, can.
But Why Me? Why Not Someone Else?
If you're still reading, then you may be asking yourself why I think I'm qualified. It's easy for me to prove I'm a good programmer; I've got a number of rather large pull requests against the Rust compiler, Libaudioverse, etc. I was able to bootstrap my DSP knowledge to the point of writing a library for audio synthesis on my own, as a blind person, by learning LaTeX and finding one sort-of-accessible textbook. When I think I can do a project, I'm almost always right.
But that's not actually my most important qualification. My most important qualification is that I am blind and need the tools.
One thing that comes up over and over in my discussions with sighted people is this: in much the same way that a blind person isn't going to understand color schemes or fonts or good UI layout or what-have-you, a sighted person doesn't understand the world of the blind. You'd never ask a blind person to develop the next matplotlib. For much the same reason, I would never ask a sighted person to develop these tools. Someone who can only academically understand isn't going to make something that works well. And we really need this to work well, especially since getting the funding to redo something is harder than getting the funding to do it in the first place.
If you're still here and haven't already, please signal boost this. Winning the Holman Prize will give me the means to make a difference to a whole lot of people.