CloudWatch is a great concept: easy to configure and inexpensive. At first glance, it looks pretty nice, too. But after about 30 minutes with it, I realized the data isn't easy to interpret. The units on the network metrics are especially confusing. This is my best attempt to explain what the network values mean.
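For example, the EC2 NetworkIn/NetworkOut numbers are byte counts summed over each sample period (300 seconds with basic monitoring), not a rate. Here is a minimal Python sketch of the conversion, assuming you pull the Sum statistic:

```python
# CloudWatch EC2 NetworkIn/NetworkOut datapoints (Sum statistic) are
# total bytes transferred during the sample period, not an
# instantaneous rate. Basic monitoring samples every 300 seconds.

def cloudwatch_bytes_to_mbps(total_bytes: float, period_seconds: int = 300) -> float:
    """Average megabits per second over one sample period."""
    return total_bytes * 8 / period_seconds / 1_000_000

# Example: 1.5 GB of NetworkOut in one 5-minute period
print(cloudwatch_bytes_to_mbps(1_500_000_000))  # -> 40.0 Mbps
```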
For a number of years, we have streamed HLS video via CloudFront, using a Wowza Streaming Engine server to convert our RTMP streams to HLS on the fly. CloudFront provides almost infinite scalability for the HLS stream, since the static chunk files are easy to cache and serve from edge locations.
For high availability, we want to use two independent WSE servers in two AWS availability zones. But this has been problematic: the two servers are never 100% in sync in their HLS chunking of the incoming live stream. A client that switches between them mid-stream can therefore request a chunk the other server hasn't produced (or has numbered differently), get a bad response, and drop the live stream.
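To make the desync concrete: each origin's live chunklist carries an #EXT-X-MEDIA-SEQUENCE counter, and when the two origins disagree on it (or on where the chunk boundaries fall), a client bounced between them asks for chunks that don't line up. A quick sketch for spotting the drift; the origin URLs are hypothetical placeholders:

```python
# Compare the HLS media sequence numbers of two WSE origins to see
# how far out of sync their chunking is. URLs are placeholders.
import re
import urllib.request

ORIGINS = [
    "http://wse-a.example.com/live/stream/chunklist.m3u8",
    "http://wse-b.example.com/live/stream/chunklist.m3u8",
]

def media_sequence(url: str) -> int:
    playlist = urllib.request.urlopen(url).read().decode("utf-8")
    match = re.search(r"#EXT-X-MEDIA-SEQUENCE:(\d+)", playlist)
    if match is None:
        raise ValueError(f"no media sequence tag in {url}")
    return int(match.group(1))

seq_a, seq_b = (media_sequence(url) for url in ORIGINS)
print(f"origin A at chunk {seq_a}, origin B at chunk {seq_b}, "
      f"drift = {abs(seq_a - seq_b)}")
```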
After a lot of experimentation, I have come up with a way to assemble a multi-AZ, high-availability cluster of WSE servers that can reliably stream HLS video from an incoming RTMP feed.
I’ve been lucky enough to go to Lollapalooza for the past 5 years. I really like to do my homework before I go so I know who I want to see. There are always tons of bands I’ve never heard of, and every year, some of them end up being my favorites at the show.
So I build Spotify playlists of every band at Lolla, using recent setlists from setlist.fm whenever they are available. A lot of work goes into this, and I’d like for other people to have the chance to use these playlists.
Here are the bands I’m interested in listening to more as Lolla gets closer. Bands marked with an asterisk are those I already know I want to see (sadly, I’m sure there will be conflicts!).
In our AWS migration, we found it necessary to run an FTP server. Yeah, I know: “FTP? In the 20-teens?” Look, I get it; nobody wants to run an FTP server in this day and age. But it is still a convenient way for partner companies to transfer data to us via automation. This isn’t highly sensitive data; our main concern is keeping the FTP server isolated from our other services so that any vulnerabilities there don’t propagate to more critical systems.
At any rate, we found it surprisingly challenging to build a highly available FTP service in AWS.
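A big part of the pain is FTP's passive mode: the server hands the client a second IP address and port for the data connection, which interacts badly with NAT and security groups. If you use vsftpd (an assumption; any FTP daemon has an equivalent), the relevant settings look roughly like this, with a placeholder Elastic IP:

```
# vsftpd.conf -- passive mode behind NAT (sketch)
pasv_enable=YES
# Data-connection port range; must also be open in the security group.
pasv_min_port=10000
pasv_max_port=10100
# Advertise the public (Elastic) IP, not the instance's private address.
pasv_address=203.0.113.10
```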
After a lot of reading about AWS and the failures that have happened over the years, I’ve come to the conclusion that to be truly resilient against a complete AZ failure, you need enough capacity running in each AZ to handle the entire load of your application on its own. With two AZs, that means running at no more than 50% utilization in steady state: if your peak load needs ten instances, each zone needs ten.
Recently, I tried to upgrade my old OpenELEC 3.2.4 system to LibreELEC 8, which ships with Kodi 17. Things did not go well.
We are in the middle of a massive migration to the AWS cloud. While we are excited by the prospect of ditching a lot of our hardware responsibilities, you can’t make a change this big without some pain.
So far, Snowball has been the biggest source of frustration.
I’ve used Picasa for over 10 years to manage my family’s photo library. We have about 50,000 images in there with 22,000 tags, stars, album memberships, etc. Now that Picasa will no longer be supported by Google, I had to find a replacement. And it really hasn’t been easy. I thought I’d share my strategy for anybody who might be in a similar situation.
We’re in the process of building out some new Linux-based video encoders, and we want to output to a LOT of different destinations: live streams, archived versions on disk, high-quality versions for future editing, JPEG stills, etc.
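One way to fan all of this out from a single ffmpeg process (one decode, a separate encode per output) is ffmpeg's multiple-output syntax, where each output gets its own codec settings. A sketch with placeholder destinations and rates, not our exact pipeline:

```
ffmpeg -i "$INPUT" \
  -c:v libx264 -b:v 4M -c:a aac -f flv rtmp://live.example.com/app/stream \
  -c:v libx264 -crf 16 -preset slow -c:a aac /archive/master.mp4 \
  -vf fps=1/10 -q:v 2 /stills/frame_%05d.jpg
```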
QuickSync is a great way to get more out of our processors by offloading video encoding to the integrated GPU. To figure out which architecture to invest in, we ran some tests with a Broadwell processor, the Core i7-5775C (3.3 GHz), and a Skylake processor, the Core i7-6700K (4.0 GHz).
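For context on the kind of test: ffmpeg exposes QuickSync through its h264_qsv encoder, so a benchmark run looks something like the following (file names and bitrate are placeholders, not our exact test):

```
# Time a QuickSync-accelerated H.264 encode.
time ffmpeg -i input.mp4 -c:v h264_qsv -b:v 6M -c:a copy output.mp4
```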