Camlorn_audio 0.1

Camlorn_audio is now in a usable form; I am consequently releasing it. It is here.

Note that it's definitely not ready for production use. There are bugs. There are probably crippling bugs, crash bugs, and just about any other type of bug you care to think of.

It will try to set up HRTF automatically. The examples included in the zip don't use a sound that shows it off particularly well; I will fix this in a future release. If you want to know for sure that HRTF is working, add should_use_hrtf = False to the init_camlorn_audio call in the examples. It should sound different and dramatically worse. If you are using a surround system (which should work, but I can't test), HRTF should be disabled automatically and the surround system used properly. Also, odd but true, at least for me: the HRTF effect becomes more pronounced the longer you use it, and more pronounced still if you're actually controlling something.

There are still some missing features that I intend to add shortly. This is a very early release compared to what it will be.

Currently, this is Windows only. It is close to working on other platforms, but I am more concerned with features. If someone gets it to work on their platform, let me know; I probably want a diff.

See the readme for contact info if you find a bug or have a question.

What's Still Needed for Camlorn_audio

This is just a quick update on what's actually left. If someone wants to try to build Camlorn_audio, please feel free to do so. The build instructions should be up to date.

  • First, getting rid of my fork of OpenAL Soft. By default, OpenAL Soft looks for an ini file that can override a bunch of settings, and a number of limits need to be raised for it to work out well: you only get 255 sources and 4 auxiliary sends (which basically means effects), and HRTF is not configured. I have a fork that disables the ini loading and provides a method to set these things, but moving forward and tracking upstream OpenAL Soft is annoying at best.

  • Second, the rest of the EFX effects need binding. This needs a lot of manual typing, but shouldn't be too difficult.

  • Third, sound cloning. This is partially done. The idea is that you make a sound with the parameters you want and call clone on it to get another, identical one.

  • Fourth, one-off fire-and-forget playing. This will probably be a class that looks identical to a Sound3D, but with disabled (or no-op) stop and pause methods. The inheritance hierarchy and the fact that this is C++ make it hard to remove those methods on only one class. Alternatively, it may be possible to make this a special method on Sound3D, especially once the sound cloning functionality works.

  • Finally, docs. This is going to be the hardest part. A lot of difficult things need explaining. The API is fine, but the 50 parameters on EAX Reverb all need something at least, and I choose not to count "Go check the OpenAL Spec".
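For context on the first item: stock OpenAL Soft reads these overrides from an ini file rather than exposing an API. A sketch of the relevant fragment, with key names taken from OpenAL Soft's alsoftrc.sample (exact keys and defaults vary by version, so treat this as illustrative):

```ini
[general]
# Raise the allocatable-source limit past the compiled default.
sources = 1024
# Auxiliary sends per source and total auxiliary effect slots.
sends = 4
slots = 16
# Enable HRTF mixing.
hrtf = true
```

This ini-based configuration is what the fork mentioned above replaces with programmatic setters.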
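On the fourth point, one way the wrapper approach might look. This is purely a sketch; Sound3D's real interface isn't shown in this post, so clone() and the rest are stand-in names:

```cpp
#include <memory>

// Stand-in for the real Camlorn_audio Sound3D; clone() and play() are
// hypothetical names based on the description above.
class Sound3D {
public:
    inline static int plays = 0;          // counts play() calls, for demonstration
    Sound3D* clone() const { return new Sound3D(*this); }
    void play() { ++plays; }
    void stop() {}
    void pause() {}
};

// One possible fire-and-forget design: instead of deleting stop()/pause()
// on a subclass (awkward in a C++ inheritance hierarchy), wrap a clone of
// the prototype sound and expose only play().
class OneShot {
public:
    explicit OneShot(const Sound3D& prototype) : sound(prototype.clone()) {}
    void play() { sound->play(); }
private:
    std::unique_ptr<Sound3D> sound;       // stop() and pause() stay hidden
};
```

The wrapper composes rather than inherits, which sidesteps the "remove methods on only one class" problem entirely, at the cost of re-forwarding whatever setters a one-off sound still needs.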

When will it be out? Hopefully soon. It got delayed in favor of some other projects, and I don't promise that it won't again. It is coming, and is currently usable with some dark corners. There will be binaries when I am ready for wider distribution and support.

Using and Abusing the C++ Preprocessor in Camlorn_audio

The C++ preprocessor is both simple and complex. It can be incredibly useful, and every C++ program uses it, but few go beyond the basics: #include is everywhere, and #define is something most C++ programmers know about.

I have gone further. In this post, I will present the best and worst C++ code I have written to date: auto-implementing functions. I would consider very carefully before doing it this way again, but it is effective and reduced the size of the Camlorn_audio C bindings by an order of magnitude.

Anyhow, the code:

Read more…
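The actual code is behind that link, but the general shape of the technique--using a macro to stamp out the repetitive C wrapper functions for each property--can be sketched like this. All names here are illustrative, not the real Camlorn_audio bindings:

```cpp
// Imagine a C++ class whose float properties each need a pair of C
// wrapper functions for the C API.
class Sound {
public:
    float get_pitch() const { return pitch; }
    void set_pitch(float v) { pitch = v; }
    float get_gain() const { return gain; }
    void set_gain(float v) { gain = v; }
private:
    float pitch = 1.0f, gain = 1.0f;
};

// One macro stamps out both C wrappers for a property, so each new
// property costs one line instead of two hand-written functions.
#define C_PROPERTY(name) \
    extern "C" float sound_get_##name(Sound* s) { return s->get_##name(); } \
    extern "C" void sound_set_##name(Sound* s, float v) { s->set_##name(v); }

C_PROPERTY(pitch)
C_PROPERTY(gain)
```

With a few dozen properties across several classes, this style is what makes an order-of-magnitude reduction in binding code plausible--and also what makes the result hard to read and debug, since the functions never appear literally in the source.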

A Camlorn_audio Demo

I finally have a tech demo for Camlorn_audio. It's here. As that recording shows, I also have Reverb, EAX Reverb, and Echo. I'm going to implement something to make them easier to use at some point. For now, you have to design your own effects, but I think I'm going to write something that can load them from a file so you can share them with others and the like.

The program that makes that demo, including window and keyboard handling, is 95 lines. It uses Camlorn_audio and SDL only, and basically lets you move and manipulate sounds with the keyboard.

Things that are possibly coming, in no particular order:

  • Streaming, including the ability to stream from the internet.

  • The ability to assign events to sounds, so that you can have a function called when important things play, e.g. in a cut scene.

  • Some dynamic environment simulators that change reverb parameters in realtime to simulate hallways, rooms, and the like.

  • More things as I think of them, and obviously more examples. Probably a tutorial of some sort, definitely at some point in the near future.
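The event idea in particular can be sketched briefly. To be clear, this is hypothetical; none of these names exist in Camlorn_audio yet:

```cpp
#include <functional>
#include <vector>

// Illustrative sketch of "events on sounds": callers register callbacks,
// and the engine fires them when playback finishes.
class Sound {
public:
    // Register a function to run when this sound finishes playing.
    void on_finished(std::function<void()> callback) {
        callbacks.push_back(std::move(callback));
    }
    // The engine would call this internally when playback actually ends.
    void notify_finished() {
        for (auto& cb : callbacks) cb();
    }
private:
    std::vector<std::function<void()>> callbacks;
};
```

Using std::function keeps the hook flexible--lambdas, free functions, and bound member functions all work--which matters for the cut-scene case, where the next action usually needs to capture some game state.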

Current State of Camlorn_audio

I've been working a lot on Camlorn_audio recently, and am hopefully getting close to something I can release. I thought I'd post about what it's able to do at the moment, what it needs still, and what you can do if you want to use it.

Current Status

Currently, Camlorn_audio can play sounds in 3d and apply echo. This sounds like barely anything, but it can do so--with a bit of setup--in high quality, approaching pre-Vista DirectSound. It provides access to a great number of Creative products and anything supporting OpenAL, and takes care of a ton of low-level infrastructure and details. Here are the four lines to get a sound playing. This is not quite a complete example--you need a method of keeping the program open long enough to actually hear it, plus some includes and the like.

Context c;
Sound3d s(c);

It can load any file format supported by Libsndfile. It will never be able to load MP3, but it can be passed a vector of audio data; if you have a license for the MP3 codec, you can decode files yourself and pass the data to Camlorn_audio. I do not want to get involved in the awful horror that is MP3 licensing.

It also has some basic support for EFX. Currently, only echo is in place; more effects will be added shortly. The low-level infrastructure for EFX works but needs improvement: if you attempt to use that part of the library, expect things to crash rather easily, and a lot of things don't work as expected. I intend to go over it with a fine-toothed comb and hammer it into something more usable and less arcane shortly.

Everything is documented to some extent and there is a readme. It needs tutorials, but most of the code has documentation comments using Doxygen. I intend to write some tutorials and expand the readme a great deal before making a binary release.

Finally, due to popular demand, there is an as-yet-incomplete C API.

I want to use it. What do I do?

Currently, building Camlorn_audio yourself takes a bit of know-how. You need to obtain the git repository, which is currently undergoing frequent changes. You can find the Bitbucket page here.

You also need Doxygen, Boost (1.53 is what I use for testing; earlier may or may not work), the OpenAL SDK, Libsndfile, Visual Studio 2010 (or possibly 2012), and possibly OpenAL Soft. There are links and build instructions in the readme; the easiest method is to obtain Doxygen from here, and do the following at the command prompt:

cd dir_of_the_repository\library
doxygen doxyfile

This will generate a set of documentation in dir_of_the_repository\docs. Opening dir_of_the_repository\docs\html\index.html will bring up the introduction, which includes build instructions and some explanation.

Finally, some examples exist in examples/, including how to use what currently exists of EFX and how to load sound from files. These examples are automatically built with the library.

The Recent Downtime, Lessons About Backups, and the Status of Camlorn_audio

Firstly, I can definitely set up a web server in less than half the time it took the first time. Secondly, the reason I know this is that I am now on a different VPS.

So, what happened? Apparently, the VPS control panel used by Chicago VPS, Solus, had a vulnerability. A major one. Someone got high-level access to the Atlanta node and began downloading all the databases, deleting as they went.

This, in and of itself, isn't bad. What is bad is that it took Chicago VPS two days to even get my VPS back to me, and they couldn't restore from backups. Apparently, restoring from backups takes manpower. Apparently, it takes enough manpower that one person is not enough. Apparently, it takes so much manpower that it's possible not to have enough people on staff to actually do it.

Not only did it take two days to get me a VPS again, the control panel is still down and said VPS is wiped clean. Ironically enough, my bill was due this Sunday. I've cancelled my account and moved to Bhost, which I've heard all sorts of good things about. Hopefully this won't happen again. I don't have the three announcements they posted, but it was clear from the wording that Chicago VPS wasn't prepared for this eventuality.

As a consequence of all this, I am going to begin weekly backups as soon as possible. I wasn't doing this before; obviously, I will be in future. A file backup alone wouldn't have helped much, though; I also need a weekly database dump for WordPress, and I will be doing that as well. Once I've formulated a pair of backup scripts, I'll post them here with directions--backing up a Linux VPS to Windows has some peculiarities and oddities.

There are in fact about 5 missing blog posts, and I'm not going to take the time to retype them from the Google cache.

In brighter news, Camlorn_audio is now over on Bitbucket. This may have some accessibility issues, and if it does I'll take the time to move it back to my personal server. Having your Git repository go down with your web site is not conducive to getting work done. Since I've never heard of Bitbucket being taken down for 4 days via a hack, I think it will ultimately be a better solution. That, and I can get an issue tracker and a wiki by just checking a box. The link from here to there will be up sometime soon.

And the brightest news: the reason I found out my web site was down is that I was going to make a nice and lengthy post about Camlorn_audio, how it's usable, and how close I am to some sort of release. Said post was going to include all sorts of stuff about how you can build it yourself if you want. I could release a compiled version, but I want streaming and EFX support before I do so. I intend to implement those two features, document it, and release 1.0 soonish. I'll write said detailed post later, but that's where it stands in a nutshell.

A Basic OpenAL Sample

I've just created and uploaded a basic OpenAL sample. It plays a sound (it was supposed to be a C-major chord, but what I ended up with works better; explanation below) that moves in a spiral around the listener's head, in a counterclockwise direction. It is commented far more heavily than typical source code, as it is intended for someone new to the technology to follow.

See the readme for directions, and beware, for it may fail for mysterious reasons. It shouldn't, but it may. I am not responsible for any failures, damages, or the like in any way, sorry. In all seriousness, the chances of this actually breaking something are very, very slim, but you have been warned--it's sad that we have to put this disclaimer on word processors, to be honest, but there you have it.

OpenAL Soft is LGPL. I'm placing this code in the public domain, because it's not really production quality and is basically pointless.

Link: here

And a quick explanation of why normal sounds sound better, at least as I understand it: HRTF data sets work by convolving two signals. This basically means that HRTFs cleverly modify the component frequencies. You'll be almost entirely unable to notice it with a pure sine wave, as that has only one component frequency and basically overwhelms everything else, but real sounds have hundreds of component frequencies; basic synthesized speech starts with 8 and 39 additional parameters, for example, and real-world sounds have a whole lot more than that.
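Concretely, the convolution described above is, in standard signal-processing notation (nothing here is specific to any one HRTF implementation):

```latex
% Time domain: the output is the input convolved with the HRTF impulse response.
y(t) = (x * h)(t) = \int_{-\infty}^{\infty} x(\tau)\, h(t - \tau)\, d\tau
% Frequency domain: convolution becomes pointwise multiplication.
Y(f) = X(f)\, H(f)
```

A pure sine wave has a single nonzero component in X(f), so multiplying by H(f) can only scale and phase-shift that one component; a rich sound has many components for H(f) to shape, which is why the filtering is actually audible.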

I do not intend to support this. If there's demand, I might update it and get it working better, but there are no plans at present. It has no abstraction and is basically an ugly let's-learn-the-technology hack. Half of it should be abstracted out into helper functions at the least, and I will eventually have to wrap most of that functionality in a sound class of some sort. If you get use out of it, that's great, but I have no plans to update it.

Some Notes on Progress

So, I haven't posted in a bit, and figured it was time I did. Here's where I stand.

First, and most importantly, I've managed to compile NVDA from source. This is important for quite a few reasons, most notably that one of my upcoming projects may possibly be to implement some support for Skype. We shall see. More generally, I wished to be able to talk about it competently and take a look when things break.

Secondly, I've decided on a toolchain. I found WinDBG, which frees me from Visual Studio (at last), and I'm going to either use CMake or Scons. I was looking at MinGW, but it can't make a 64-bit executable and it can't interface with SAPI. Unfortunately, I have yet to find a text editor; I'm very close to rolling my own.

Finally, I've had ideas. First is a roguelike--there's nothing new or exciting here; it'd play a bit like Swamp, and be random of course. Not very exciting technologically, but probably fun to play. The second is still in the beta phase--it's not a full idea yet--but it involves voxels, and if I coded it, it might be the first 3d platformer for the blind. I've got a forum thread asking for thoughts on it. It's far from fleshed out, and I can't even imagine how I'd make a workable level editor for it.

Where from here?

So, what's next? I just started us off with a post about setting up a web server with almost no knowledge (I now have knowledge--that's why I didn't go with a managed solution of some sort). There's no real defined topic yet, so I'd like to go ahead and set one here.

First and foremost, I intend to start learning OpenAL shortly--I suspect that the basics aren't really that hard--in preparation for the creation of audiogames. For those who don't know, these are thought of as "games for the blind", though that description has proven itself a lie--Papa Sangre was an audiogame and, the problems I had aside [ref]The game attempts to appeal to the sighted, and blind players are a minority, or at least that was the intent, so it's far, far too easy for me[/ref], was popular among the sighted as well, so far as I can tell. I don't have a link to that at the moment.

I'll probably also post stuff about books, and intend to blog from the Guide Dog school this summer when I get my next guide dog. I'll probably compile some useful info of some sort later, related to guide dogs and blindness in addition to the primary topic of programming. I've considered opening up the doors to those who want to ask me specific questions about being a blind college student, but this is a ways off yet.

And, for something interesting, which everyone who likes sci-fi should read--we can't have these boring blog posts without a random link after all: Fine Structure. That's free, by the way.

The Path to a Working Web Server

I wasn't sure what I was going to blog about for this first post, as the classic "I am..." post seemed really boring. I don't want to read about me; perhaps you do, but no matter. The following is long; skimming may be worthwhile.

Completely at a loss for what to post, I was afraid that my fate would be crawling old threads--something along the lines of a Google search: ideas for my first blog post.  Then I started setting up the server, and it turns out that that in and of itself is worth an entire post (possibly multiple posts, but I'd rather focus on other things).  I went from trivial Linux knowledge, enough to use gcc and make, to a functioning web server in about three weeks.  I could have done it in four or five days, but I'm also in college pursuing a degree in computer science, and had to work around classes and generally do it in those times during which I had mental energy.  The only other thing of interest--I hesitate to even mention it (not really, but I haven't found the sarcasm font yet)--is that I'm blind.  This blog is going to eventually be about programming, probably programming audiogames, but this makes a nice first post.  Anyhow, the technicalities and difficulties of setting up a web server:

I decided to go with a VPS.  At that time, I had no knowledge of which VPS provider I wanted, nor of what I would need.  I would dwell on this point, but there are more interesting things ahead.  After an afternoon of what amounts to window shopping, and several discussions on Alter Aeon and in a private Facebook group, I decided on Chicago VPS. I'll probably upgrade sort of soonish; we shall see. It's currently running Debian 6.

VPS in hand, it's time to decide on the web server. Following even more discussion--this is still the first day--I decided on Nginx. This is where the problems start.

Nginx in hand, it's time to look at Octopress. As evidenced by this blog, I am not using Octopress. This is where my blindness comes into play. Octopress is great if you know Markdown or are willing to learn it, and is most definitely faster than what I'm doing right now--the WordPress HTML editor. The problem is quite simple: Octopress is blog-only. I am unable to make websites that look good in pure HTML, for the quite simple reason that I can't see them. Screen readers will report invisible elements no problem. Screen readers will happily report text that is the same color as the window. Screen readers have no problem with 20 overlapping links, none at all. An entire afternoon--and several problems installing the correct version of Ruby--later, I realized this. Back to the drawing board. Also, back to classes for a week. This site is eventually going to have non-blog pages, so this matters.

Great, I thought, what else works? I know: WordPress. That's everywhere these days. Let's use WordPress. Mmm, time to set up PHP. This sounds innocent. This is where I discovered that it is not. I could go on about how I tried many, many times to get it working, but I'm going to cut to the chase. The Nginx documentation wiki is out of date. If anyone else has to do this, here's the starting point:

location ~ \.php$ {
    try_files $uri =404;
    # fastcgi_params already sets QUERY_STRING and friends.
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_index index.php;
    # Required: point this at wherever your PHP FastCGI process actually
    # listens (the address below is only an example).
    fastcgi_pass 127.0.0.1:9000;
}

Unfortunately, I don't yet have WordPress configured for code, so that's probably going to render incorrectly--copy/pasting it may or may not work, who knows. It's also probably not the most secure PHP setup.

After that, it gets anticlimactic for a while. Everything ran smoothly, and I'm like, time for WordPress. Unfortunately not.

Time to talk about iptables. Iptables was written by someone who designs things without thought for users. I don't mean that it's poorly written--quite the contrary--just that the person who designed it designed it for computers, not humans. The VPS running this is on OpenVZ, so I get iptables or nothing. Realizing that I was open to all sorts of attacks without iptables, it was time to get that working.

I'm not even going to try to describe iptables here. Consider this, though: iptables -A INPUT -p tcp --dport ssh -j ACCEPT. It is case sensitive, it may not be exactly right, and it is quite literally the simplest example. It gets worse from there, quickly, and there are all sorts of cryptic errors and such. I'm on a VPS, and only just discovered how to use the serial console--at least, with a screen reader--yesterday, so one mistake locks me out of the server forever. For those who need to do this via ssh, the above should be the first iptables command you type, ever.

Be warned that rules do not persist by default--on Debian you want the iptables-persistent package. I had bigger issues than "iptables won't save", but that package does the trick, if you do some googling and find out that the magic file you have to save the rules to is /etc/iptables/rules. I'm not going to write an iptables tutorial, at least not here. I'm still not completely confident that iptables is set up properly, and am almost certain there's a security hole somewhere; at least one of the rules I needed required that I go learn what happens during a TCP handshake.

Finally, we reach the part that I thought would be hard. I was completely convinced that the hardest part would be setting up the MySQL database and getting WordPress going. It's not. It's surprisingly easy, in fact, especially compared to iptables. You just do a few things, and the WordPress instructions, surprisingly, actually work as advertised.

And so we come to the final pair of issues, the easiest to resolve. WordPress wants to run as the same user as the web server and wishes to have ownership of its files. Granting that fixes updating and installing plugins, as no one enables FTP these days (if you have, go learn about SFTP and get WinSCP). The other issue was the HTML-editor-only issue that seems to be hitting a lot of people. I fixed this, in my case, with an obscure configuration directive that's not documented anywhere.

Anyhow, that's it for the web server, and it brings us to this evening. I would write about how I started the post by pressing the new post button, but that would lead to infinite recursion and, more importantly, boredom. I'll post about projects I plan to start soon, and intend to get some code samples and a git repository going in the near future. I plan to release some small utility stuff as blog posts with explanation, and might from time to time write about C++ language features and the like. We shall see.