Shared posts

11 Jul 10:35

GoPro officially adds webcam support to its action cameras

by Staff

The cameras get a much-needed WFH upgrade.

GoPro knows that its cameras probably aren't getting as much action these days, but if there's one thing they can do exceptionally well while many of us are still WFH, it's be a webcam. The company has added official webcam support, and if you've got the new HERO8, all you'll need is a single USB-C cable. The software is compatible with the HERO8/7/6/5/4 Black models, and it's currently only available on macOS, with Windows support to come in the future. Oh, and don't forget about that wide viewing angle, so be sure to clean up before you get on that Zoom call.

gopro.com

21 Aug 05:55

MongoDB foreground index builds

This one is explored in depth over at nelhagedebugsshit.tumblr.com.

MongoDB has two different approaches to building indexes, selectable by the operator at index-creation time.

  • “foreground” index builds lock the entire server for the duration of the build, but promise faster builds and more compact indexes, since they can look at all of the data up front.
  • “background” builds are slower and lead to more disk fragmentation, but allow other queries (reads and writes) to proceed during the index build, by building the B-Tree in parallel with ongoing operations, and merging new writes into the index-in-progress.
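
To make the choice concrete, here's a minimal sketch using PyMongo against the MongoDB versions discussed in this post; the client, database, and collection names are placeholders, and the `background` index option is the knob the two bullets above describe.

```python
from pymongo import MongoClient, ASCENDING

# Placeholder connection and collection, purely for illustration.
coll = MongoClient()["mydb"]["events"]

# Foreground build (the default): locks the server for the duration,
# but promises a faster build and a more compact index.
coll.create_index([("user_id", ASCENDING)])

# Background build: slower and more fragmented, but other reads and
# writes can proceed while it runs.
coll.create_index([("user_id", ASCENDING)], background=True)
```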

It’s a fine theory. But until MongoDB 2.6, foreground index builds were actually dramatically slower on large collections!

It turns out that background builds did the obvious naïve B-Tree construction: create an empty B-Tree and insert each record one at a time. This approach has some weaknesses, but it’s pretty clearly O(n*log n): O(n) inserts, each of which is an O(log n) tree insert.
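
For intuition, here's a toy version of that incremental strategy, using a plain in-memory binary search tree as a stand-in for MongoDB's on-disk B-Tree; the keys are invented for illustration.

```python
# Toy binary search tree standing in for the B-Tree.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert one key, costing roughly O(log n) comparisons."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

# The "background" build in miniature: n inserts at O(log n) each.
root = None
for key in [42, 7, 99, 3, 58, 23]:
    root = insert(root, key)
```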

Foreground builds, able to stop the world, tried to be clever by doing an external-memory sort, and then building the B-Tree in place on top of the sorted records. This approach is in fact much faster if implemented correctly, since it reduces disk I/O (even though it will also be O(n*log n) comparisons asymptotically – you’re still doing a comparison sort, after all).

However, the external sort in MongoDB 2.4 and earlier misses this goal! The sort divides the N records into N/k chunks of roughly-constant size k, sorts each chunk, and then merges them by repeatedly scanning every chunk for the global minimum. Each of the N output records therefore costs a scan over all N/k chunks; for constant k, N/k = O(N), so the merge ends up doing N * O(N) comparisons, for O(N²) work. As long as N is small enough that N/k is small, it does beat out the incremental B-Tree build, but somewhere around 1M records it tips over, and gets rapidly slower.
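
Here's a hypothetical sketch of that merge step (the function name and chunk size are made up for illustration), showing where the quadratic cost comes from: every output record forces a linear scan over all the chunk heads.

```python
def naive_external_merge(records, k):
    # Sort fixed-size chunks of k records, as the external sort would.
    chunks = [sorted(records[i:i + k]) for i in range(0, len(records), k)]
    heads = [0] * len(chunks)  # next unconsumed position in each chunk
    out = []
    for _ in range(len(records)):
        # Scan every chunk for the global minimum: O(N/k) per output record,
        # so O(N^2 / k) comparisons overall.
        best = None
        for ci, chunk in enumerate(chunks):
            pos = heads[ci]
            if pos == len(chunk):
                continue
            if best is None or chunk[pos] < chunks[best][heads[best]]:
                best = ci
        out.append(chunks[best][heads[best]])
        heads[best] += 1
    return out
```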

Mongo 2.6 fixes this by using a min-heap to do the merge, restoring O(n*log n) asymptotic performance, while retaining the I/O efficiency of the external sort. Hooray.
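
Sketched with Python's standard library as a stand-in for the 2.6 change (MongoDB's actual implementation is its own C++ code): heapq.merge keeps one cursor per chunk in a min-heap, so selecting each output record costs O(log(N/k)) comparisons instead of a full scan over the chunks.

```python
import heapq

def heap_external_merge(records, k):
    # Same chunking as before, but the merge pulls the minimum from a
    # min-heap of chunk cursors rather than rescanning every chunk.
    chunks = [sorted(records[i:i + k]) for i in range(0, len(records), k)]
    return list(heapq.merge(*chunks))
```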