Shared posts

26 Jun 22:30

Shaky software causes huge spike in bank outages: RBA

Ageing, complex systems "pose a significant risk".
06 Mar 21:45

Hump Day Deals: 75% Off Hotels, EB Games Sale, 40% Off Dyson Vacuums

by Chris Jager
Here are today's best deals for Hump Day, including: $277 off Samsung's Galaxy S10+, a massive 40% off Dyson vacuums, EB Games' bestselling games clearance sale, $2 McDonald's Quarter Pounders and heaps more! More »
   
 
 
06 Mar 21:45

How To Make A Cake Look Pretty If You Suck At Decorating

by Claire Lower
I can make a delicious tasting cake but — when it comes to decorating — I am an episode of Nailed It! come to life. I suspect my problems lie mostly in my lack of patience, but it’s worth noting that I am incapable of drawing most stick figures, and it may be an issue of a complete lack of artistic talent. More »
   
 
 
27 Oct 22:10

CQRS/MediatR implementation patterns

by Jimmy Bogard
Early on in the CQRS/ES days, I saw a lot of questions on modeling problems with event sourcing. Specifically, trying to fit every square modeling problem into the round hole of event sourcing. This isn’t anything against event sourcing, but … Continue reading →
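The excerpt cuts off before the patterns themselves, but the core of a MediatR-style CQRS setup is a single dispatcher that routes each command or query object to exactly one handler. Here's a minimal, language-agnostic sketch of that shape in Python (hypothetical names; MediatR itself is a C# library, so this only illustrates the idea and is not Bogard's code):

```python
# Minimal mediator sketch: one request type maps to one handler (illustrative only).
from dataclasses import dataclass

class Mediator:
    def __init__(self):
        self._handlers = {}  # request type -> handler instance

    def register(self, request_type, handler):
        self._handlers[request_type] = handler

    def send(self, request):
        # Route the request object to the single handler registered for its type.
        return self._handlers[type(request)].handle(request)

# CQRS-style split: a "command" (write) and a "query" (read) are separate request types.
@dataclass
class CreateOrder:        # command: changes state
    customer_id: int

@dataclass
class GetOrderStatus:     # query: reads state
    order_id: int

class CreateOrderHandler:
    def handle(self, request):
        return f"order created for customer {request.customer_id}"

class GetOrderStatusHandler:
    def handle(self, request):
        return f"status of order {request.order_id}: shipped"

mediator = Mediator()
mediator.register(CreateOrder, CreateOrderHandler())
mediator.register(GetOrderStatus, GetOrderStatusHandler())
print(mediator.send(CreateOrder(customer_id=42)))
print(mediator.send(GetOrderStatus(order_id=7)))
```

The CQRS part is simply that writes and reads are modeled as separate request types with separate handlers, whether or not event sourcing is involved.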
19 Mar 23:41

Baby’s First Crocheted Yoda Outfit

by Andrew Salomone

Since we're smack-dab in the middle of this year's National Crochet Week, it seems like a good time to take a gander at this handsome crocheted Yoda outfit that Redditor and expectant parent dishevelledmind made for their future little Yoda! And, if you're looking for some crocheted Yoda wear […]

Read more on MAKE

The post Baby’s First Crocheted Yoda Outfit appeared first on Make:.

09 Mar 00:06

How we upgrade a live data center

by Nick Craver

A few weeks ago we upgraded a lot of the core infrastructure in our New York (okay, it’s really in New Jersey now – but don’t tell anyone) data center. We love being open with everything we do (including infrastructure), and really consider it one of the best job perks we have. So here’s how and why we upgrade a data center. First, take a moment to look at what Stack Overflow started as. It’s 5 years later and hardware has come a long way.

Why?

Up until 2 months ago, we hadn’t replaced any servers since upgrading from the original Stack Overflow web stack. There just hasn’t been a need since we first moved to the New York data center (Oct 23rd, 2010 – over 4 years ago).  We’re always reorganizing, tuning, checking allocations, and generally optimizing code and infrastructure wherever we can. We mostly do this for page load performance; the lower CPU and memory usage on the web tier is usually a (welcomed) side-effect.

So what happened? We had a meetup. All of the Stack Exchange engineering staff got together at our Denver office in October last year and we made some decisions. One of those decisions was what to do about infrastructure hardware from a lifecycle and financial standpoint. We decided that from here on out: hardware is good for approximately 4 years. After that we will: retire it, replace it, or make an exception and extend the warranty on it. This lets us simplify a great many things from a management perspective, for example: we limit ourselves to 2 generations of servers at any given time and we aren’t in the warranty renewal business except for exceptions. We can order all hardware up front with the simple goal of 4 years of life and with a 4 year warranty.

Why 4 years? It seems pretty arbitrary. Spoiler alert: it is. We were running on 4 year old hardware at the time and it worked out pretty well so far. Seriously, that’s it: do what works for you. Most companies depreciate hardware across 3 years, making questions like “what do we do with the old servers?” much easier. For those unfamiliar, depreciated hardware effectively means “off the books.” We could re-purpose it outside production, donate it, let employees go nuts, etc. If you haven’t heard, we raised a little money recently. While the final amounts weren’t decided when we were at the company meetup in Denver, we did know that we wanted to make 2015 an investment year and beef up hardware for the next 4.

Over the next 2 months, we evaluated what was over 4 years old and what was getting close. It turns out almost all of our Dell 11th generation hardware (including the web tier) fits these criteria – so it made a lot of sense to replace the entire generation and eliminate a slew of management-specific issues with it. Managing just 12th and 13th generation hardware and software makes life a lot easier – and the 12th generation hardware will be mostly software upgradable to near equivalency to 13th gen around April 2015.

What Got Love

In those 2 months, we realized we were running on a lot of old servers (most of them from May 2010):

  • Web Tier (11 servers)
  • Redis Servers (2 servers)
  • Second SQL Cluster (3 servers – 1 in Oregon)
  • File Server
  • Utility Server
  • VM Servers (5 servers)
  • Tag Engine Servers (2 servers)
  • SQL Log Database

We also could use some more space, so let’s add on:

  • An additional SAN
  • An additional DAS for the backup server

That's a lot of servers getting replaced. How many? This many: [Photo: Greg Bray, and a lot of old servers]

The Upgrade

I know what you’re thinking: “Nick, how do you go about making such a fancy pile of servers?” I’m glad you asked. Here’s how a Stack Exchange infrastructure upgrade happens in the live data center. We chose not to failover for this upgrade; instead we used multiple points of redundancy in the live data center to upgrade it while all traffic was flowing from there.

Day -3 (Thursday, Jan 22nd): Our upgrade plan was finished (this took about 1.5 days total), including everything we could think of. We had limited time on-site, so to make the best of that we itemized and planned all the upgrades in advance (most of them successfully, read on). You can read the full upgrade plan here.

Day 0 (Sunday, Jan 25th): The on-site sysadmins for this upgrade were George Beech, Greg Bray, and Nick Craver (note: several remote sysadmins were heavily involved in this upgrade as well: Geoff Dalgas online from Corvallis, OR, Shane Madden, online from Denver, CO, and Tom Limoncelli who helped a ton with the planning online from New Jersey). Shortly before flying in we got some unsettling news about the weather. We packed our snow gear and headed to New York.

Day 1 (Monday, Jan 26th): While our office is in lower Manhattan, the data center is now located in Jersey City, across the Hudson River. [Map: office to the data center] We knew there was a lot to get done in the time we had allotted in New York, weather or not. The thought was that if we skipped Monday we likely couldn't get back to the data center Tuesday if the PATH (mass transit to New Jersey) shut down. This did end up happening. The team decision was: go time. We got overnight gear then headed to the data center. Here's what was there waiting to be installed:

[Photos: web, redis, and service servers; new 10Gb network gear; FX2s blade chassis for VMs]

Yeah, we were pretty excited too. Before we got started with the server upgrade, though, we first had to fix a critical issue with the redis servers supporting the launching-in-24-hours Targeted Job Ads. These machines were originally for Cassandra (we broke that data store), then Elasticsearch (broke that too), and eventually redis. Curious? Jason Punyon and Kevin Montrose have an excellent blog series on Providence; you can find Punyon's post on what broke with each data store here.

The data drives we ordered for these then-redundant systems were the Samsung 840 Pro drives, which turned out to have a critical firmware bug. This was causing our server-to-server copies across dual 10Gb network connections to top out around 12MB/s (ouch). Given the hundreds of gigs of memory in these redis instances, that doesn't really work. So we needed to upgrade the firmware on these drives to restore performance. This needed to be done online, letting the RAID 10 arrays rebuild as we went. Since you can't really upgrade firmware over most USB interfaces, we tore apart this poor, poor little desktop to do our bidding: [Photo: the sacrificial desktop]
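As an aside, some quick back-of-the-envelope math shows why 12MB/s was a non-starter for re-seeding these redis boxes. The 300GB figure below is a hypothetical stand-in for the "hundreds of gigs" mentioned above, not a number from the post:

```python
# Rough transfer-time math for re-seeding a redis slave over the network.
dataset_gb = 300          # hypothetical "hundreds of gigs" of redis data
mb = dataset_gb * 1024    # size in MB

for rate_mb_s in (12, 150):   # broken-firmware rate vs. post-fix copy rate
    hours = mb / rate_mb_s / 3600
    print(f"{dataset_gb}GB at {rate_mb_s}MB/s is roughly {hours:.1f} hours")

# 300GB at 12MB/s  -> about 7.1 hours
# 300GB at 150MB/s -> about 0.6 hours
```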

Once that firmware upgrade was kicked off, it ran in parallel with other work (since RAID 10s with data take tens of minutes to rebuild, even with SSDs). The end result was much-improved 100-200MB/s file copies (we'll see what new bottleneck we're hitting soon – still lots of tuning to do).

Now the fun begins. In Rack C (we have high respect for our racks, they get title casing), we wanted to move from the existing SFP+ 10Gb connectivity, combined with 1Gb uplinks for everything else, to a single dual 10Gb BASE-T (RJ45 connector) copper solution. This is for a few reasons: the SFP+ cabling we use is called twinaxial, which is harder to work with in cable arms, has unpredictable girth when ordered, and isn't readily available natively in the network daughter cards for these Dell servers. The SFP+ FEXes also don't allow us to connect any 1Gb BASE-T items that we may have (though that doesn't apply in this rack, it does when making this a standard across all racks, like with our load balancers). So here's what we started with in Rack C:

What we want to end up with is:

The plan was to simplify network config, cabling, overall variety, and save 4U in the process. Here’s what the top of the rack looked like when we started: …and the middle (cable management covers already off):

Let's get started. First, we wanted the KVMs online while working so we, ummm, "temporarily relocated" them:

Now that those are out of the way, it's time to drop the existing SFP+ FEXes down as low as we could to install the new 10Gb BASE-T FEXes in their final home up top. The nature of how the Nexus Fabric Extenders work allows us to allocate between 1 and 8 uplinks to each FEX. This means we can unplug 4 ports from each old FEX without any network interruption, take those 4 now-dead ports out of the VPC (virtual port channel), and assign them to the new FEX. So we go from 8/0 to 4/4 to 0/8 overall as we move from old to new through the upgrade. Here's the middle step of that process:

With the new network in place, we can start replacing some servers. We had already yanked several old servers: one we virtualized and 2 we didn't need anymore. Combine this with evacuating our NY-VM01 & NY-VM02 hosts and we've made 5U of space through the rack. On top of NY-VM01 & 02 was 1 of the 1Gb FEXes and 1U of cable management. Luckily for us, everything is plugged into both FEXes and we could rip one out early, which meant we could spin up the new VM infrastructure faster than we had planned. Yep, we're already changing THE PLAN™. That's how it goes.

What are we replacing those aging VM servers with? I'm glad you asked. These bad boys:

There are 2 of these Dell PowerEdge FX2s Blade Chassis each with 2 FC630 blades. Each blade has dual Intel E5-2698v3 18-core processors and 768GB of RAM (and that’s only half capacity). Each chassis has 80Gbps of uplink capacity as well via the dual 4x 10Gb IOA modules. Here they are installed:

The split with 2 half-full chassis gives us 2 things: capacity to expand by double, and avoiding any single points of failure with the VM hosts. That was easy, right? Well, what we didn't plan on was the network portion of the day: it turns out those IO Aggregators in the back are pretty much full switches, with 4 external 10Gbps ports and 8 internal 10Gbps ports (2 per blade) each. Once we figured out what they could and couldn't do, we got the bonding in place and the new hosts spun up.

It's important to note here that it wasn't any of the guys in the data center spinning up this VM architecture after the network was live. We're set up so that Shane Madden was able to do all this remotely. Once he had the new NY-VM01 & 02 online (now blades), we migrated all VMs over to those 2 hosts and were able to rip out the old NY-VM03-05 servers to make more room. As we ripped things out, Shane was able to spin up the last 2 blades and bring our new beasts fully online. The net result of this upgrade was substantially more CPU and memory (from 528GB to 3,072GB overall) as well as network connectivity. The old hosts each had 4x 1Gb (trunk) for most access and 2x 10Gb for iSCSI access to the SAN. The new blade hosts each have 20Gb of trunk access to all networks to split as they need.

But we’re not done yet. Here’s the new EqualLogic PS6210 SAN that went in below (that’s NY-LOGSQL01 further below going in as well):

[Photo: VM servers, SAN, and NY-LOGSQL01] Our old SAN was a PS6200 with 24x 900GB 10k drives and SFP+ only. This is a newer 10Gb BASE-T 24x 1.2TB 10k version with more speed, more space, and the ability to go active/active with the existing SAN. Along with the SAN we also installed this new NY-LOGSQL01 server (replacing an aging Dell R510 never designed to be a SQL server – it was purchased as a NAS):

The additional space freed by the other VM hosts let us install a new file and utility server:

Of note here: the NY-UTIL02 utility server has a lot of drive bays, so we could install 8x Samsung 840 Pros in a RAID 0 in order to restore and test the SQL backups we make every night. It's RAID 0 for space because all of the data is literally loaded from scratch nightly – there's nothing to lose. An important lesson we learned last year was that the 840 Pros do not have capacitors in them, and power loss will cause data loss if they're active since they have a bit of DRAM on board for write cache. Given this info, we opted to stick some Intel S3700 800GB drives we had from the production SQL server upgrades into our NY-DEVSQL01 box and move the less resilient 840s to this restore server where it really doesn't matter.

Okay, let’s snap back to blizzard reality. At this point mass transit had shut down and all hotels in (blizzard) walking distance were booked solid. Though we started checking accommodations as soon as we arrived on site, we had no luck finding any hotels. Though the blizzard did far less than predicted, it was still stout enough to shut everything down. So, we decided to go as late as we could and get ahead of schedule. To be clear: this was the decision of the guys on site, not management. At Stack Exchange employees are trusted to get things done, however they best perceive how to do that. It’s something we really love about this job.

If life hands you lemons, ignore those silly lemons and go install shiny new hardware instead.

This is where we have to give a shout out to our data center QTS. These guys had the office manager help us find any hotel we could, set out extra cots for us to crash on, and even ordered extra pizza and drinks so we didn’t go starving. This was all without asking – they are always fantastic and we’d recommend them to anyone looking for hosting in a heartbeat.

After getting all the VMs spun up, the SAN configured, and some additional wiring ripped out, we ended around 9:30am Tuesday morning when mass transit was spinning back up. To wrap up the long night, this was the near-heart attack we ended on, a machine locking up at: [Photo: BIOS lockup] Turns out a power supply was just too awesome and needed replacing. The BIOS did successfully upgrade with the defective power supply removed and we got a replacement in before the week was done. Note: we ordered a new one rather than RMA the old one (which we did later). We keep a spare power supply for each wattage level in the data center, and try to use as few different levels as possible.

Day 2 (Tuesday, Jan 27th): We got some sleep, got some food, and arrived on site around 8pm. The web tier rebuild (a rolling build-out) was kicked off first:

[Photos: a stack of web servers; a line of web servers; the same line; inside a web server]

While we rotated 3 servers at a time out for rebuilds on the new hardware, we also upgraded some existing R620 servers from 4x 1Gb network daughter cards to 2x 10Gb + 2x 1Gb NDCs. Here’s what that looks like for NY-SERVICE03:

[Photos: a line of web servers; the same line; inside a web server]

The web tier rebuilding gave us a chance to clean up some cabling. Remember those 2 SFP+ FEXes? They're almost empty: the last 2 items left were the old SAN and that aging R510 NAS/SQL server. This is where the first major hiccup in our plan occurred. We planned to install a 3rd PCIe card in the backup server pictured here: [Photo: the backup server] We knew it was a Dell R620 10-bay chassis that has 3 half-height PCIe cards. We knew it had a SAS controller for the existing DAS and a PCIe card for the SFP+ 10Gb connections it has (it's in the network rack with the cores, in which all 96 ports are 10Gb SFP+). Oh hey, look at that, it's hooked to a tape drive which required another SAS controller we forgot about. Crap. Okay, these things happen. New plan.

We had extra 10Gb network daughter cards (NDCs) on hand, so we decided to upgrade the NDC in the backup server, remove the SFP+ PCIe card, and replace it with the new 12Gb SAS controller. We also forgot to bring the half-height mounting bracket for the new card and had to get creative with some metal snips (edit: turns out it never came with one – we feel slightly less dumb about this now). So how do we plug that new 10Gb BASE-T card into the network core? We can't. At least not at 10Gb. Those 2 last SFP+ items in Rack C also needed a home – so we decided to make a trade. The whole backup setup (including the new MD1400 DAS) just loves its new Rack C home:

Then we could finally remove those SFP+ FEXes, bring those KVMs back to sanity, and clean things up in Rack C:

[Photos: those pesky hanging KVMs; top of Rack C; middle of Rack C]

See? There was a plan all along. The last item to go in Rack C for the day is NY-GIT02, our new Gitlab and TeamCity server:

[Photos: signatures from the New York devs; racked and ready to go]

Note: we used to run TeamCity on Windows on NY-WEB11. Geoff Dalgas threw out the idea during the upgrade of moving it to hardware: the NY-GIT02 box. Because they are such intertwined dependencies (for which both have an offsite backup), combining them actually made sense. It gave TeamCity more power, even faster disk access (it does a lot of XML file…stuff), and made the web tier more homogeneous all at the same time. It also made the downtime of NY-WEB11 (which was imminent) have far less impact. This made lots of sense, so we changed THE PLAN™ and went with it. More specifically, Dalgas went with it and set it all up, remotely from Oregon.

While this was happening, Greg was fighting with a DSC install hang regarding git on our web tier: [Screenshot: Greg losing to DSC] Wow, that's a lot of red, I wonder who's winning. And that's Dalgas in a hangout on my laptop, hi Dalgas!

Since the web tier builds were a relatively new process fighting us, we took the time to address some of the recent cabling changes. The KVMs had been installed hastily not long before this because we knew a re-cable was coming. In Rack A, for example, we moved the top 10Gb FEX up a U to expand the cable management to 2U and added 1U of management space between the KVMs. Here's that process:

[Photos: a messy starting KVM; removing the cable management to make room; ahhhh, room!; that's better, all done]

Since we had to re-cable from the 1Gb middle FEXes in Rack A & B (all 4 being removed) to the 10Gb Top-of-Rack FEXes, we moved a few things around. The CloudFlare load balancers at the bottom, below the web tier, moved up to spots freed by the recently virtualized DNS servers, joining the other 2 public load balancers. The removal of the 1Gb FEXes as part of our all-10Gb overhaul meant that the middle of Racks A & B had much more space available. Here's the before and after:

[Photos: web tier below a 1Gb FEX; look at all that space!]

After 2 batches of web servers, cable cleanup, and network gear removal, we called it quits around 8:30am to go grab some rest. Things were moving well and we only had half the web tier, cabling, and a few other servers left to replace.

Day 3 (Wednesday, Jan 28th): We were back in the data center just before 5pm, set up and ready to go. The last non-web servers to be replaced were the redis and “service” (tag engine, elasticsearch indexing, etc.) boxes:

[Photos: a look inside redis; NY-REDIS01 and NY-SERVICE05 racked and ready for an OS]

We have 3 tag engine boxes (purely for reload stalls and optimal concurrency, not load) and 2 redis servers in the New York data center. One of the tag engine boxes was a more-recent R620 (this one got the 10Gb upgrade earlier) and wasn't replaced. That left NY-SERVICE04, NY-SERVICE05, NY-REDIS01 and NY-REDIS02.

On the service boxes the process was pretty easy, though we did learn something interesting: if you put both of the drives from the RAID 10 OS array in an R610 into the new R630…it boots all the way into Windows 2012 without any issues. This threw us for a moment because we didn't remember building it in the last 3 minutes. The rebuild is simple: lay down Windows 2012 R2 via our image + updates + DSC, then install the jobs they do. StackServer (from a sysadmin standpoint) is simply a Windows service – our TeamCity build handles the install and such, it's literally just a parameter flag. These boxes also run a small IIS instance for internal services, but that's also a simple build-out. The last task they do is host a DFS share, which we wanted to trim down and simplify the topology of, so we left them disabled as DFS targets and tackled that the following week – we had NY-SERVICE03 in rotation for the shares and could do such work entirely remotely.

For redis we always have a slave chain happening: [diagram of the slave chain]. This means we can do an upgrade/failover/upgrade without interrupting service at all (a rough sketch of that sequence follows below).

After all those buildouts, here's the super fancy new web tier installed: [Photo: the new web tier]
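As for that redis sketch: the post doesn't show its actual commands, but the upgrade/failover/upgrade idea looks roughly like this with the redis-py client and hypothetical host names (redis 2.8-era SLAVEOF semantics), offered only as an illustration of the sequence:

```python
# Sketch of upgrading a redis master/slave pair one box at a time (illustrative only).
import redis

OLD_MASTER = "ny-redis01.example.local"   # hypothetical host names
OLD_SLAVE = "ny-redis02.example.local"

master = redis.StrictRedis(host=OLD_MASTER, port=6379)
slave = redis.StrictRedis(host=OLD_SLAVE, port=6379)

# 1. Promote the up-to-date slave to master so the old master can be rebuilt.
slave.slaveof()                      # SLAVEOF NO ONE -> stop replicating, become master

# 2. Once the old master is rebuilt, re-attach it as a slave of the promoted box
#    so the chain is restored, then repeat in the other direction if needed.
master.slaveof(OLD_SLAVE, 6379)      # old master now replicates from the new master

# At no point is there a moment without a writable master, so service isn't interrupted.
```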

To get an idea of the scale of hardware difference, the old web tier was Dell R610s with dual Intel E5640 processors and 48GB of RAM (upgraded over the years). The new web tier has dual Intel 2687W v3 processors and 64GB of DDR4 memory. We re-used the same dual Intel 320 300GB SSDs for the OS RAID 1. If you’re curious about specs on all this hardware – the next post we’ll do is a detailed writeup of our current infrastructure including exact specs.

Day 4 (Thursday, Jan 29th): I picked a fight with the cluster rack, D. Much of the day was spent giving the cluster rack a makeover now that we had most of the cables we needed in. When it was first racked, the pieces we needed hadn’t arrived by go time. It turns out we were still short a few cat and power cables as you’ll see in the photos, but we were able to get 98% of the way there.

It took a while to whip this rack into shape because we added cable arms where they were missing, replaced most of the cabling, and are fairly particular about the way we do things. For instance: how do you know things are plugged into the right port and where the other end of the cable goes? Labels. Lots and lots of labels. We label both ends of every cable and every server on both sides. It adds a bit of time now, but it saves both time and mistakes later.

Cable labels!Web servers without labelsWeb servers with labels!Web server rear labels

Here’s what the racks ended up looking like when we ran out of time this trip:

It’s not perfect since we ran out of several cables of the proper color and length. We have ordered those and George will be tidying the last few bits up.

I know what you’re thinking. We don’t think that’s enough server eye-candy either.

Here’s the full album of our move.

And here’s the #SnowOps twitter stream which has a bit more.

What Went Wrong

  • We’d be downright lying to say everything went smoothly. Hardware upgrades of this magnitude never do. Expect it. Plan for it. Allow time for it.
  • Remember when we upgraded to those new database servers in 2010 and the performance wasn’t what we expected? Yeah, that. There is a bug we’re currently helping Dell track down in their 1.0.4/1.1.4 BIOS for these systems that seems to not respect whatever performance setting you have. With Windows, a custom performance profile disabling C-States to stay at max performance works. In CentOS 7, it does not – but disabling the Intel PState driver does (a quick way to check which scaling driver is active is sketched after this list). We have even ordered and just racked a minimal R630 to test and debug issues like this as well as test our deployment from bare metal to constantly improve our build automation. Whatever is at fault with these settings not being respected, our goal is to get that vendor to release an update addressing the issue so that others don’t get the same nasty surprise.
  • We ran into an issue deploying our web tier with DSC getting locked up on a certain reboot thinking it needed a reboot to finish but coming up in the same state after a reboot in an endless cycle. We also hit issues with our deployment of the git client on those machines.
  • We learned that accidentally sticking a server with nothing but naked IIS into rotation is really bad. Sorry about that one.
  • We learned that if you move the drives from a RAID array from an R610 to an R630 and don’t catch the PXE boot prompt, the server will happily boot all the way into the OS.
  • We learned the good and the bad of the Dell FX2 IOA architecture and how they are self-contained switches.
  • We learned the CMC (management) ports on the FX2 chassis are effectively a switch. We knew they were suitable for daisy chaining purposes. However, we promptly forgot this, plugged them both in for redundancy and created a switching loop that reset Spanning Tree on our management network. Oops.
  • We learned the one guy on twitter who was OCD about the one upside down box was right. It was a pain to flip that web server over after opening it upside down and removing some critical box supports.
  • We didn’t mention this was a charge-only cable. Wow, that one riled twitter up. We appreciate the #infosec concern though!
  • We drastically underestimated how much twitter loves naked servers. It’s okay, we do too.
  • We learned that Dell MD1400 (13g and 12Gb/s) DAS (direct attached storage) arrays do not support hooking into their 12g servers like our R620 backup server. We’re working with them on resolving this issue.
  • We learned Dell hardware diagnostics don’t even check the power supply, even when the server has an orange light on the front complaining about it.
  • We learned that Blizzards are cold, the wind is colder, and sleep is optional.
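On the C-States/P-States bullet above: the post doesn't include any commands, but as a rough sketch, one way to confirm which CPU frequency scaling driver and governor a CentOS 7 box is actually using is to read them straight out of sysfs:

```python
# Quick check of the active CPU frequency scaling driver/governor via sysfs (Linux).
from pathlib import Path

cpu0 = Path("/sys/devices/system/cpu/cpu0/cpufreq")

for name in ("scaling_driver", "scaling_governor"):
    f = cpu0 / name
    value = f.read_text().strip() if f.exists() else "unavailable"
    print(f"{name}: {value}")

# With the intel_pstate driver disabled (e.g. via the intel_pstate=disable kernel
# boot parameter), scaling_driver typically falls back to acpi-cpufreq.
```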

The Payoff

Here's what the average render time for question pages looks like; if you look really closely you can guess when the upgrade happened: [Graph: question page render times] The decrease in question render times (from approx 30-35ms to 10-15ms) is only part of the fun. The next post in this series will detail many of the other drastic performance increases we've seen as the result of our upgrades. Stay tuned for a lot of real world payoffs we'll share in the coming weeks.

Does all this sound like fun?

To us, it is fun. If you feel the same way, come do it with us. We are specifically looking for sysadmins preferably with data center experience to come help out in New York. We are currently hiring 2 positions:

If you’re curious at all, please ask us questions here, Twitter, or wherever you’re most comfortable. Really. We love Q&A.

09 Jan 04:25

Meet the Micro Soldering Mom

by Julia

When her kids matter-of-factly reported that the toilet in their New York home was acting up, Jessa Jones-Burdett didn’t initially suspect that anything was amiss. After all, in a home with four small kids and two adults, all things—toilets included—are subject to a little extra wear and tear. A finicky toilet was just par for the course.

Later when she noticed her iPhone was missing, Jessa still didn’t realize that something was wrong. The kids were constantly picking it up and depositing it somewhere else in the house. She was sure she’d find the errant phone eventually.

It wasn’t until Jessa checked Find My iPhone and saw an angry ‘X’ instead of a location that the truth of what happened hit her like a truck: the plugged toilet was not a coincidence. One of the kids had flushed her phone into the plumbing. It was down there still—jammed in the toilet bend. And Jessa had to get it out.

She snaked the line, trying to dislodge the stubborn phone. But it was no use. The iPhone was iStuck.

“I got so frustrated with that project,” Jessa recalled. “I hauled that toilet out to the front yard and [...] I sledgehammered the thing right in the front yard. And there it was–my iPhone, right in the bend of it.”

Jessa’s two boys pose with the iToilet, before it was sledgehammered to bits and pieces.

Once liberated, the phone was waterlogged but surprisingly intact. After cleaning with alcohol and drying, the phone turned on. The screen worked fine, and the camera was undamaged. But the phone wasn’t charging anymore. After some hunting around on iFixit’s troubleshooting forums, Jessa determined that a tiny charging coil had fritzed out during the iPhone’s underwater excursion.

“And it just seemed like a minor little problem,” said Jessa. “So I started looking into how to restore that one tiny function of the phone.”

That investigation would change her life. A couple of years after the toilet incident, Jessa is now a master of gadget repair, a micro soldering expert, and the proprietress of a thriving board-level repair business: iPad Rehab. All that while balancing her role as a stay-at-home mom.

Once a fixer, always a fixer

So, how does a busy mom land on electronics repair as a vocation? Turns out, fixing electronics wasn’t much of a leap at all. Jessa’s always been a natural tinkerer and the family handywoman. She’s always been a problem solver. And she’d already devoted most of her life to fixing things—it’s just that the things Jessa was accustomed to fixing were organic, instead of mechanical.

After attending University of Maryland at College Park to study molecular biology, Jess earned a PhD in human genetics from Johns Hopkins School of Medicine. She studied DNA mutations and their connection to diseases like cancer. And she chose that field because she wanted to fix the human body on a cellular level.

“I’m not really that keen on learning for the sake of understanding. I like to understand for the sake of fixing,” Jessa explained.

After Johns Hopkins, Jessa had two sons and taught biology at a New York university. Life was busy, but it was good. Then (as it often does), life threw her and her husband, Jeff Burdett, a twist. A big one: twin girls. She left her position at the college and settled into the business of raising the kids as a stay-at-home mom.

It wasn’t until the iPhone found its way down the toilet that Jessa rekindled her fascination for fixing things. And since she’s a molecular biologist by training, it was no wonder that she gravitated towards fixing really, really small things.

Micro-scopic repair

The flushed iPhone presented Jessa with an interesting challenge. Fixing the charging coil required delicate repairs directly to the motherboard. Basically, Jessa needed to perform brain surgery on her phone.

“There’s a lot of people who think ‘Oh, if I mess something up on the motherboard, then that’s the end of the line.’ You need to replace the whole motherboard, which is essentially the device. And that’s not true,” Jessa explained. “A lot of components on the motherboard are like little tiny LEGOs. You find the one that’s broken, then you can pick it up and put another one on.”

But that process—picking one micro-component off the board and replacing it—requires micro soldering, a precision trade not widely practiced in the US. Mostly it’s done overseas, in places like China, and India, and Eastern Europe. Places where resources and replacement parts are a bit more scarce.

Jessa in her former dining room.

“It seemed that people, especially in the Eastern European group, have a different pressure to repair than we do,” Jess said of her initial investigation into micro soldering. “It may be less easy to just go down to the Apple store and get another iPhone, so you have a greater pressure to repair what you have. They are just total masters of repair.”

Jessa found the experts online, and asked them to teach her everything they knew. She bought the right equipment and she practiced on dead phones. It took a year of trial and error, but Jessa taught herself how to microsolder. And then, two years ago, she decided to put her new skill to good use: she started running MommyFixit—a general device repair service—out of her home.

Jessa repaired broken screens, swapped batteries, and fixed motherboards. But the demand for board-level repairs was so high that eventually she transitioned to specialty micro soldering. MommyFixit became iPad Rehab. Now she does 20 to 30 repairs each week (“all day, every day,” she said with a laugh). Mostly, her clients are other repair shops. They send iPad Rehab the boards they’ve messed up. She fixes the boards at home and sends them back.

Essentially, Jessa is a resurrectionist. She resuscitates devices that are beyond reclamation—the ones that are well and truly dead.

“There is of course the personal satisfaction in taking something that is a paperweight and returning it to life again,” Jessa said. “That always is a drug-like, positive experience.”

And she’s trying to share the joy of fixing with her kids. Jessa’s seven-year-old son can do iPad screen repairs. Her nine-year-old son enjoys soldering. And her twin daughters consider Jessa’s toolbox as an extension of their own toy box. In the Burdett household, you see, repair is very much a family activity.

“Look, Ma, new screen!”: Jessa’s son, Sam, holds up an iPad he just repaired.

Mobilizing repair moms

If Jessa has her way, she’s not going to be the only mom in town who repairs electronics. In fact, she’s training other moms as mobile repair technicians. She teaches them repair skills, gives them a place to practice, gets them parts, and instills them with the confidence they need to start their own MommyFixit repair businesses. It’s a job they can do without feeling like they’re sacrificing their family, Jessa explained. Moms can repair phones while the kids are at school, or salvage an iPad while the toddlers are napping.

“The stay-at-home-mom community is huge and full of talent,” Jessa explained. “Everybody would like to have some way to make some money that allows them to be flexible and lets them use their brains. And there’s really no reason that repair can’t do that. Women, in particular, are fantastic at repair of tiny devices.”

Better yet, an at-home repair business doesn’t involve selling weird lotions or jewelry or knives. No cult-like, multi-level marketing seminars. No aggressive sales pitches for friends and family. No quotas. Just screwdrivers, a workspace, and some repair parts—then they, too, can learn the satisfaction of bringing a dead device back to life. And make a little cash along the way.

UPDATE: Many readers have inquired what sort of equipment Jessa is using. So, we asked. Her soldering station is the Hakko FM-203, with FM-2023 hot tweezers, FM-2032 micro pencil, and a standard Hakko regular iron. She also has a Hakko hot air station. For her microscope, she’s using the AmScope SM-4TZ-144A Professional Trinocular Stereo Zoom Microscope.

Check out Jessa’s website. It’s got a ton of information on gadget repair and micro soldering. And follow her on Facebook for more repair tips and tricks.

09 Jan 04:18

How to Listen to Your Body (and Become Happy Again)

by Jessica Blanchard


“Keeping your body healthy is an expression of gratitude to the whole cosmos—the trees, the clouds, everything.” ~Thich Nhat Hanh

It’s embarrassing, isn’t it?

You don’t want to make a fuss about tiny health annoyances.

But you feel lethargic for no apparent reason. You get constipated, especially when you travel. You have difficulty sleeping.  And your hormones are all over the place.  You hold onto that niggly five or ten pounds like your life depends on it.

Sound familiar? I’ve been there too.

I was working at a dream job and living on the French Riviera. I was paid a lot of money to help Fortune 500 companies with their IT strategies.

I worked in cities like Paris, Dublin, London, and Manchester during the week, staying in luxury hotels and flying to my home in Nice on weekends. We partied like rock stars on the beaches, and in exclusive clubs and glamorous villas. At twenty-nine, I was a management-level executive on the cusp of becoming a partner.

Meanwhile, my body wasn't happy. I was chronically tired. I slept poorly. And despite daily exercise and yoga, I couldn't figure out my weight gain.

I tried the radical Master Cleanse—drinking lemon juice and maple syrup for a week. But the extra weight would creep back.

My hormones went crazy. When I stopped birth control pills, my menstrual cycles stopped. I wasn’t sure if that was the reason for my blotchy skin and depression. And the worst part was my mood. I wasn’t happy, despite all the glitzy outside trappings.

The One Thing Most People Never Learn To Do

Then I did something most people never learn to do: I listened.

I felt great after practicing yoga. I took a baby step: I practiced more yoga and eventually attended teacher training sessions. Fast-forward a couple years….

I quit my job, packed my belongings, and moved to a yoga retreat center in Thailand. The move felt natural and organic.

I lived simply in a tiny bungalow and taught yoga retreats to tourists. And my health improved. I was sleeping well. My periods eventually returned. I felt better and better, and my sparkle returned too.

The first and most important step is to stop and listen. Your body and mind are intimately connected. Listen to your body and you’ll learn a ton. Start with tiny steps and you’ll reach your pot of gold quicker than you’d expect.

You can do this.

You’d think doing so would be impossible, but it’s not. I’ll tell you how.

But first, let’s look at three core principles that could save you.

Don’t Make This Monumental Mistake

Most people ignore their small but annoying health issues. Nothing about your health is inconsequential. Everything matters. Your digestion. Your ability to lose belly fat. Your bowel movements.

You’re not alone if you want to run screaming and bury your head in the sand. How about changing your mindset?

Rather than categorizing what is wrong with you, notice how your body throws you clues. For example, you aren’t going to the bathroom every day. Usually for a very simple reason—lack of dietary fiber. Try adding an apple and ground flax to your breakfast and see what happens.

The Alarming Truth About Stress

It can make or break your healthiest intentions. When we perceive danger, stress is our body’s natural response.

For cave people, stress came when a lion was about to pounce; we needed to run like lightning.

Under stress, we optimize our resources for survival and shut down non-essential functions. Translation? Your digestion grinds to a halt, your sex hormones (estrogen, progesterone, and testosterone) convert to cortisol, and your blood sugar skyrockets.

This is okay now and then. Are you in a state of constant, low-grade stress? Imagine the havoc and inner turmoil.

A few stress-linked conditions include IBS, constipation, weight gain, insomnia, high blood sugar, and hormone irregularities—for women, missed or absent periods, severe PMS, and fertility issues. And these are just the tip of the iceberg.

Your body and mind are like the matrix.

The Western approach to medicine is to examine each problem separately, so you end up with a different specialist for each malady.

In Eastern medicines, your body is a united whole rather than a constellation of unrelated parts. Your insomnia may be the result of high stress. Or your constipation and weight gain may be due to a complete absence of fiber in your diet.

Now let’s talk about what you need to do.

But first, I must introduce you to your personal, world-class health advocate. And it’s not your doctor, your chiropractor, or even your yoga teacher.

It’s you.

1. What silence can teach you about listening. 

Set aside time to listen to your own deepest wishes. I searched for answers outside of myself, looking for rigid rules and diets. I used food to shut off my thoughts. It was hard, but I gradually let my truths surface. I know you can do it too. Decide on a time, and set aside ten minutes each day. Breathe deeply and listen.

How are you feeling physically, mentally, and emotionally?

Have a journal nearby to jot down any thoughts. Notice what pops into your head. Bring yourself back to your breath if you start to get lost in thoughts.

2. What would happen if you followed your passions right now?

You can do this right now in tiny steps. Make time to do the things you love.

How do you most want to spend each day? Write a list of your priorities and brainstorm easy solutions.

Exercise: wake up twenty minutes earlier. Do a series of sit-ups, push-ups, leg lifts, squats, etc.

Time with your children: say no to superfluous activities—committees, boards, etc.

More creative time: schedule your time on weekends for writing, painting, or whatever you love.

Treat it like a priority appointment.

When I worked at a corporate job, I’d wake early to practice yoga at home before work. I didn’t miss the sleep, and I was much more productive and happier during the day. I couldn’t control the rest of the day, but I relished my sacred morning ritual.

3. Say goodbye to your job if it makes you unhappy.

Right now, maybe you need it to support your family. No problem. Make sure you limit your working hours. Make the rest count.

Turn off your TV and put away your iPhone. Spend engaged time with your family. Thinking about work takes you away from important leisure activities.

Your people will always be important—your children, parents, siblings, friends, and your tribe. Don’t sweat the little things. Cultures with high longevity emphasize personal relationships, support networks, and family. The elders are the big shots, not the richest in the village.

4. How to glow from the inside out.

We are genetically wired to thrive on a whole-foods diet. A rule of thumb: the more processed the food, the less you should eat.

Most of the diets that actually work—paleo, low-carb, and vegan—all have whole foods at their base. They vary in content, but all encourage vegetables, fruits, and good-quality protein sources.

Return to those niggly health issues. Take an honest look at your diet. What could you do better? What things would you be willing to change?

I used to systematically overeat healthy foods. My diet was great, but I used foods, even healthy ones, to quell my inner unhappiness. I hated my job. I felt lonely and isolated.

Start with one change per month. Not more. Drink a glass of water with your meals and skip sugary drinks. Or eat a salad with your lunch or dinner.

5. Here’s a little-known secret about your mind.

How do you feel after eating a plate of fried foods? Or a big meal in a restaurant followed by dessert? I feel fuzzy and sluggish.

What about after eating a bowl of candy? Like a space cadet? Sugar spikes our blood sugar and makes concentration impossible.

Want to keep your mind clear and alert? Choose fresh vegetables and fruits, high-quality animal products, legumes like lentils and beans, healthy fats from nuts and seeds, and high-quality cold-pressed oils.

Why Most People Fail Miserably

Simply put, they don’t prioritize their own health. Don’t fall down that rabbit hole.

Your job is not to put everyone else’s health above your own.

Your job is not to make excuses about what you should be doing but aren’t.

Your job is to be your most enthusiastic health advocate. You must fight tooth and nail to make stellar choices for your health.

Your good intentions are worthless if you never take action. I’ve been there too. I’ve ignored my body. It was a mistake.

Start making tiny changes, like having oatmeal and an apple for breakfast. Notice how much better you feel. You’ll be chomping at the bit to do more.

Living well makes you feel better and happier. But it requires a little courage and determination.

Start with one tiny step in the right direction. Take five minutes now and decide what your first step is.

You know you deserve a healthier life.

And more happiness.

Happy jumping woman image via Shutterstock


About Jessica Blanchard

Jessica Blanchard is a registered dietitian, longtime Ayurvedic practitioner, and yoga teacher. She’s on a mission to dispel dietary myths and make healthy habits accessible to everyone. Grab your free 7-Day Meal Plan at stopfeelingcrappy.com and feel healthier and fitter one bite at a time.

The post How to Listen to Your Body (and Become Happy Again) appeared first on Tiny Buddha.

06 Jan 12:03

Transforming Self-Criticism: Stop Trying to Fix Yourself

by Lauren Fire


“I define depression as a comparison of your current reality to a fantasy about how you wish your life would be.” ~Dr. John Demartini

I always wanted to do things “right.” I was the little kid at the front of the room, raising her hand for every question. I was great at pushing myself to succeed and please.

My drive to be perfect was an asset through college and law school. I rocked high grades and landed a big firm job right out of school. But that same drive drove me right into a therapist’s office at twenty-five, where I was diagnosed with severe depression.

Then just like any good perfectionist, I drove myself harder to overcome the depression, to be more perfect. I Cookie Monstered personal growth, intensely gobbling up books, lectures, retreats, and coaching.

Have you ever been cruising along, then suddenly realized you’ve been going the wrong way for a while?

When I had suicidal thoughts in my thirties after giving birth to my daughter, my intense drive came to a screeching halt. My desire to be perfect had driven me into a deep and scary postpartum depression.

My thoughts were no longer mine, and for the first time in my life I was afraid of what was happening in my head. Something had to shift.

So I went on a new journey, one designed to find out (for real this time) how to reduce the daily suffering that I knew I was causing myself. What I learned shifted my entire life. But I’m getting ahead of myself.

Let me walk you through my journey. Maybe you can discover something about yourself along the way.

To Motivate or To Berate—That is the Question

Like all good journeys, mine starts with a hero (me) and a villain (my inner critic voice). Now, that “little voice” for me was not little at all. It was more like the Stay Puft Marshmallow Man in Ghostbusters, the mean one with the scary eyes.

One day I decided to turn toward my Mean Marshmallow Man Voice and ask it questions. Why must I be perfect? Why are you always criticizing me?

“Because you’re not perfect.” It said, with a booming voice. “You’re not…” and then it went on to list about 2,000 things that I was failing to do, be, say, or accomplish.

But this time, when I pictured all of these 2,000 things, I started to imagine the person who would actually have done all of those things. Who would this person be, this perfect version of me? Let’s name her Perfect Lauren.

Well, let’s see. Perfect Lauren would never let the clothes on her floor pile up, or the mail go unread. Perfect Lauren wouldn’t spend hours watching The Walking Dead or surfing Facebook. Perfect Lauren would work out every day, in the morning, before work.

Perfect Lauren would eat extremely well and would skip Starbucks, no matter how much she loved Salted Caramel Mochas. Perfect Lauren would have a perfect meditation practice every day.

I saw my entire life flash before my eyes, one long comparison to Perfect Lauren and one long failure to measure up. Did I assume that with enough self-abuse, one day I would become Perfect Lauren? One day I would finally be this fantasy super mom who would always “have it together”?

Suddenly I realized that my immense drive, the one that had allowed me to be so successful, was not a drive toward the happiness I wanted. I was not driving toward anything at all. I was driving away from something.

I drove myself to avoid feeling shame, self-criticism, and self-hate. I drove myself to please the Mean Marshmallow Man Voice. I drove myself to avoid hating myself.

Why do you do things? Do you exercise, eat right, study, or work hard because you love yourself and want good for yourself? Or do you do these things to avoid shame and self-criticism?

I had spent my entire life motivating myself with negativity. And I was now paying the price.

Why It’s Hard to Change

Once I realized how much I compared myself to Perfect Lauren, I tried to stop. It seems simple. Just stop doing it.

But when I tried too hard, I kept getting stuck in this Dr. Seuss-like spiral of hating myself for trying to not hate myself. My former coach used to call that a “double bind,” because you’re screwed either way.

For me to finally learn how to change this, I first had to ask myself…why? And yes, I know that I’m starting to sound like Yoda, but follow me here.

Why did I need to compare myself to Perfect Lauren? Why did it matter? When I pulled at the thread, I found the sad truth.

I compare myself to Perfect Lauren because somewhere deep in my mind I believe that Perfect Lauren gets the love. Real Lauren doesn’t. So I must constantly push myself to be Perfect Lauren, never accepting Real Lauren.

Okay, that sounds ridiculous. When you highlight a belief, sometimes it can look like a big dog with shaved fur, all shriveled and silly. I don’t believe that at all.

I believe the Lauren that leaves clothes on the floor and loses the toothpaste cap deserves love! The Lauren who hates to unload the dishwasher and loses bills in a pile of mail, she deserves love too!

How to Transform Self-Criticism

Have you ever looked endlessly for something and then realized it was sitting right in front of your face? It turns out that the solution to my self-criticism and comparison was actually pretty simple—start loving myself more.

Now loving Real Lauren, with all faults, is not easy. But I’m trying.

Instead of pushing myself with shame, hate, and self-criticism, I am learning to motivate myself with praise. Instead of threatening myself, I am pumping myself up.

And this has changed everything. I actually get more done using positive motivation. And more importantly, I feel better about what I get done. I’m happier, calmer, and feel more at peace with my life.

If you want to shift your own self-criticism and free yourself from the tyranny of your Mean Marshmallow Man, stop trying to fix yourself and start trying to love yourself.

Here is a practical way to implement this into your life:

The next time you notice that you are criticizing yourself or comparing yourself to Perfect You, stop. Hit the pause button in your head.

Next, say, “Even though I… I love and accept all of myself.” So, for me today, “Even though I shopped on Zulily instead of writing this blog post, I love and accept all of myself.”

Now imagine that you’re giving yourself a hug, internally. Try to generate a feeling of self-compassion.

When you do this regularly, you will start to notice what I noticed. Love and self-compassion can shift even the strongest negative thoughts and emotions and allow you to enjoy more of your life.

And that’s the real goal here, isn’t it? If we keep driving ourselves using self-criticism, we will never be happy, no matter how perfect we are, because we won’t enjoy the process. We won’t enjoy the journey.

I believe that the happiest people in life aren’t the ones with the least baggage. They are just the ones who learned to carry it better so that they can enjoy the ride.

The more we generate self-compassion and love, the easier perfectionism and self-criticism will be to carry. And the easier it will be for us to love and enjoy this beautiful and amazing journey called life.

Depressed woman image via Shutterstock


About Lauren Fire

Lauren Fire is the host of Inspiring Mama, a podcast and blog dedicated to finding solutions to the emotional challenges of motherhood and teaching simple and practical happiness tools to parents. Get her free happiness lesson videos by joining the Treat Yourself Challenge - 10 Days, 10 Ways to Shift from Crappy to Happy.

The post Transforming Self-Criticism: Stop Trying to Fix Yourself appeared first on Tiny Buddha.

06 Jan 11:52

Are your apps leaking your private details?

by Troy Hunt

For many regular readers here, this is probably not overly surprising: some of your apps may do nasty things. Yes, yes, we’re all very shocked about this but all jokes aside, it’s a rather nasty problem that kids in particular are at risk of. There was a piece a few days back on Channel 4 in the UK about Apps, ads and what they get from your phone where a bunch of kids had their traffic intercepted by a security firm. The results were then shared with the participants where their shocked responses could then be observed by all.

I got asked for some comments on this by SBS TV here locally which went to air last night:

This brings me to the two points I make in the video:

  1. Get your apps from the official app stores. Take apps from nefarious sources outside of there (primarily Android devices and jailbroken iOS devices) and you have no certainty of the integrity or intent of what you’re getting.
  2. Read the warnings your device gives you! Modern mobile operating systems are exceptionally good at “sandboxing” apps, that is, ensuring they run without access to other assets on the device unless you give them your express permission!

When we see kids’ photos being accessed via third party apps, it’s almost certainly because they’ve accepted a prompt just like this:

My Tom asking for access to the microphone

Now this is a simple decision – do you really like “My Tom” enough to allow it to listen to you whilst the app is running? Perhaps, but what if it asked for access to your photos? Or your contacts? You might have the common sense to reject that but kids, not so much. They see a prompt where the path forward is “OK” and just as the girl in the Channel 4 piece says, they don’t read the terms and conditions and instead just immediately jump in. Come to think of it, it’s not just kids that do that!

Apps accessing personal data such as the address book is serious business. A few years back there was an uproar around the Path app sending users’ entire address book back to their servers. Apple was decidedly unimpressed about the whole affair and as they say in that link:

Apps that collect or transmit a user’s contact data without their prior permission are in violation of our guidelines. We’re working to make this even better for our customers, and as we have done with location services, any app wishing to access contact data will require explicit user approval in a future software release.

Several years on, things are certainly better but that one great security risk we’ve always had still remains – gullible humans!

02 Dec 12:42

Cranberry Moscow Mule

by Susan Palmer

Add a festive flair to your holiday season with Cranberry Moscow Mule and Ginger Sugared Cranberries. 

Cranberry Moscow Mule #HolidayFoodParty from The Girl In The Little Red Kitchen

Let the countdown begin folks.  Only 24 days left until Christmas and 15 days until the first night of Chanukah.  The holiday party invitations are starting to show up and your inbox is overflowing with countless offers to buy this, save on that!

To make your life a little easier my Holiday Food Party friends and I have put together some delicious seasonal recipes for your holiday gatherings.  This is a one stop shop right here, with cocktails, cakes, chocolate and more!

Now, no party is complete without holiday cheer and I like mine to be in the form of a refreshing cocktail to enjoy throughout the night.   Cranberry Moscow Mules will cheer up any grinch with the sparkling candied cranberry garnish and cranberry ginger bite.

Cranberry Moscow Mule- The Girl In The Little Red Kitchen

This cocktail does take some advance planning in order to make your candied cranberries and simple syrup but once that’s done, stirring things up is as easy as 1,2,3 (and okay 4)!

Instead of watering the drink down by adding cranberry juice to ginger beer, I made a cranberry ginger simple syrup from the mixture used to candy the cranberries.  There is no waste going on in this cocktail!

Plus you can choose the intensity of the ginger depending on your own taste.  Really love the ginger kick? Then instead of strips, shred the ginger to allow some of the juices to mingle with the simple syrup and let it steep for a longer period.  Want less? Just take it out while the cranberries dry.

Oh and did I mention the syrup would make a great holiday gift? Bottle it up and bag it with a bottle of your favorite vodka!

Ginger Sugared Cranberries from The Girl In The Little Red Kitchen

Cranberry Moscow Mule
 
Author: Susan Palmer
Recipe type: Beverage, Cocktail
Ingredients
For the Sugared Cranberries & Cranberry Ginger Simple Syrup:
  • 1 cup cane or granulated sugar
  • ¾ cup water
  • 1 cup fresh ginger cut into strips
  • 1½ cups fresh cranberries
  • ½ cup cane or raw sugar for coating
For the cocktail
  • 1½ ounces Cranberry Ginger Simple Syrup
  • juice from ½ lime
  • 2 ounces vodka
  • club soda
  • ice
  • sugared cranberries for garnish
Instructions
  1. In a medium sauce pan bring the water, ginger and sugar to a simmer over medium heat until the sugar is dissolved. Add in the cranberries and let sit for about 1 minute. Turn off the heat before any cranberry starts to pop.
  2. Remove from the heat and place a small dish or bowl inside the pan, on top of the cranberries, to weigh them down and let them steep in the liquid. Let sit at room temperature for about 1½ - 2 hours, or to infuse the ginger flavor for longer, steep in your refrigerator overnight.
  3. Remove the plate and with a slotted spoon remove the cranberries, leaving the ginger back in the simple syrup, placing the plate back over to hold it down.
  4. Place the sugar on a small rimmed baking sheet with enough space for the cranberries to lie in an even layer and coat the cranberries completely.
  5. Let the cranberries air dry at room temperature for about 2 to 3 hours.
  6. Thread cranberries on a toothpick as garnish for the cocktail and keep extra in an airtight container in a cool spot for about a day.
  7. Strain the remaining ginger in the simple syrup and place in a jar.
To make the cocktail:
  1. In a collins glass rub the rim with a little simple syrup and dip in some leftover sugar from the cranberries for a sugared rim.
  2. Pour in the simple syrup, lime juice and vodka. Stir and add ice, top with club soda.
  3. Serve with a garnish of sugared cranberries.
  4. Makes 1 cocktail

HFP-Christmas-2014

Let’s see the rest of the great recipes we are whipping up for this Holiday Food Party!

 

  1. Chocolate Peppermint Bark from Cravings of a Lunatic
  2. Buche de Noel from That Skinny Chick Can Bake
  3. Cranberry Moscow Mule from The Girl in the Little Red Kitchen
  4. Chocolate Gingerbread Crumb Cake from Hungry Couple
  5. Gingerbread Cupcakes with Chai Spiced Frosting from Jen’s Favorite Cookies
  6. Apres Ski Boozy Tea from Pineapple and Coconut
  7. Raspberry Almond Torte from Magnolia Days
  8. Fruit and Nut Bars from What Smells So Good

 

 

 

Cranberry Moscow Mule | girlinthelittleredkitchen.com

Original article: Cranberry Moscow Mule

©2014 The Girl in the Little Red Kitchen. All Rights Reserved.

The post Cranberry Moscow Mule appeared first on The Girl in the Little Red Kitchen.

15 Sep 12:53

10 things I learned about rapidly scaling websites with Azure

by Troy Hunt

This is the traffic pattern that cloud pundits the world over sell the value proposition of elastic scale on:

Sessions going from barely anything to almost 12k an hour almost immediately

This is Have I been pwned? (HIBP) going from a fairly constant ~100 sessions an hour to… 12,000 an hour. Almost immediately.

This is what happened last week when traffic literally increased 60-fold overnight. September 10 – 2,105 sessions. September 11 – 124,036 sessions. Interesting stuff happens when scale changes that dramatically, that quickly so I thought I’d share a few things I learned here, both things I was already doing well and things I had to improve as a result of the experience.

Oh – why did the traffic go so nuts? Because the news headlines said there were 5 million Gmail accounts hacked. Of course what they really meant was that 5 million email addresses of unknown origin but mostly on the gmail.com domain were dumped to a Russian forum along with corresponding passwords. But let’s not let that get in the way of freaking people out around the world and having them descend on HIBP to see if they were among the unlucky ones and in the process, giving me some rather unique challenges to solve. Let me walk you through the important bits.

1) Measure everything early

You know that whole thing about not being able to improve what you can’t measure? Yeah, well it’s also very hard to know what’s going on when you can’t empirically measure your things. There were three really important tools that helped greatly in this exercise:

Google Analytics: That’s the source of the graph above and I used it extensively whilst things were nuts, particularly the real time view which showed me how many people were on the site (at least those that weren’t blocking trackers):

Google Analytics showing 1175 people on the site right now

New Relic: This totally rocks and if you’re not using it on your Azure website already, go and read Hanselman’s post about how to get it for free.

NewRelic web transactions response time graph

In fact this was the first really useful tool for realising that not only did I have some serious load, but that it was causing a slowdown on the system. I captured the graph above just after I’d sorted the scaling out – it shows you lots of errors from about 2:30am plus the .NET CLR time really ramping up. You can see things improve massively just before 6am.

NewRelic was also the go-to tool anytime, anywhere; the iPad app totally rocks with the dashboard telling me everything from the total requests to just the page requests (the main difference being API hits). The particularly useful bits were the browser and server timings with the former including things like network latency and DOM rendering (NewRelic adds some client script that does this) and the latter telling me how hard the app was working on the server:

NewRelic dashboard on the iPad

Azure Monitoring: You also get this for free and it’s part of the Azure Management Portal. This includes metrics on the sort of stuff you’re going to get charged for (such as data out) so it’s worth watching:

Azure's website monitoring graph

It also ties in to alerts which I’ll cover in a moment.

The point of all this is that right from the get-go I had really good metrics on what was going on and what a normal state looked like. I wasn’t scrambling to fit these on and figure out what the hell was going on, I knew at a glance because it was all right there in front of me already.

2) Configure alerts

I only knew there were dramas on Thursday morning because my inbox had been flooded with alerts – I had dozens of them and they looked like this:

Azure alert for high CPU usage

This is an Azure alert for CPU time and I also have one for when total requests go above a certain threshold. They’re all configurable from the monitoring screen I showed earlier and they let me know as soon as anything unusual is going on.

The other ones that were really useful were the NewRelic ones, in particular when there was a total outage (it regularly pings an endpoint on the site which also tests database and table storage connectivity) but also when the “Apdex” I mentioned earlier degrades:

Alert from NewRelic for a low Apdex

The Apdex is NewRelic’s way of measuring user satisfaction and what’s great about it is that it cuts through all the cruft around DB execution times and request queuing and CLR time and simply says “Is the user going to be satisfied with this response?” This is the real user too – the guy loading it over a crappy connection on the other side of the world as well as the bloke on 4G next to the data centre. I’m going off on a bit of a tangent here, but this is what happened to the Apdex over the three days up until the time of writing on Saturday morning:

NewRelic Apdex showing a big drop during high load

At its lowest point, over 25k people were sampled and way too many of them would have had a “Frustrating” experience because the system was just too slow. It loaded – but it was too slow. Anyway, the point is that in terms of alerts, this is the sort of thing I’m very happy to be proactively notified about.
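(For reference, Apdex is an open standard rather than a NewRelic invention. With a response time threshold T, the usual calculation is Apdex = (satisfied + tolerating/2) / total samples, where “satisfied” responses complete within T, “tolerating” ones within 4T, and anything slower counts as “frustrated”. That’s why a slow but working site still drags the score down.)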

But of course all of this is leading to the inevitable question – why did the system slow down? Don’t I have “cloud scale”? Didn’t I make a song and dance recently about just how far Azure could scale? Yep, but I had one little problem…

3) Max out the instance count from the beginning

If the concept of scaling out is foreign or indeed you’re not familiar with how it’s done in Azure, read that last link above first. In a nutshell, it’s all about adding more of the same resource rather than increasing the size of the resource. In Azure, it means I can do stuff like this:

Setting the instance count in Azure

You see the problem? Yeah, I left the instance count maxing out at 3. That is all. That is what caused my Apdex to become unacceptable as Azure did exactly what I told it to do. This, in retrospect, was stupid; the cap is there as a control to limit your spend so that you don’t scale up to 10 instances and then get a big shock a month later when your bill arrives, but if you’ve got alerts, it’s kinda pointless. Let me explain:

Azure charges by the minute. Spin up an instance, use it for an hour and a half, turn it off and you’ll only pay for 90 minutes worth of usage. Regardless of the size of the instance, 90 minutes is going to cost what for all intents and purposes is zero dollars. If you have alerts configured, such as ones for unusually high requests (which you can do via the Azure Management Portal), you’ll know about the environment scaling up very soon after it happens, possibly even before it happens depending on how you’ve configured them. With the benefit of hindsight, I would have far preferred to wake up to a happy website running 10 instances and charging me a few more dollars than one in pain and serving up a sub-par end user experience.

Of course the other way of looking at this is why on earth would you ever not want to scale? I mean it’s not like you say, “Wow, my site is presently wildly successful, I think I’ll just let the users suffer a bit though”. Some people are probably worried about the impact of something like a DDoS attack but that’s the sort of thing you can establish pretty quickly using the monitoring tools discussed above.

So max out your upper instance limit, set your alerts and stop worrying (I’ll talk more about the money side a bit later on).

4) Scale up early

Scaling out (adding instances) can happen automatically but scaling up (making them bigger) is a manual process. They both give you more capacity but the two approaches do it in different ways. In that Azure blog post on scale, I found that going from a small instance to a medium instance effectively doubled both cost and performance. Going from medium to large doubled it again and clearly the larger the instance, the further you can stretch it.

When I realised what I’d done in terms of the low instance count cap, I not only turned it all the way up to 10, I changed the instance size from small to medium. Why? In part because I wasn’t sure if 10 small instances would be enough, but I also just wanted to throw some heavy duty resources at it ASAP and get things back to normal.

The other thing is that a larger instance size wouldn’t get swamped as quickly. Check this graph:

Very fast change in the number of simultaneous sessions

That’s 727 sessions at midnight, 753 at 1am then 9,042 at 2 and 11,910 at 3am. That’s a massive change in a very small amount of time. Go back to that perf blog again for the details, but when Azure scales it adds an instance, sees how things go for a while (a configurable while) then adds another one if required. The “cool down” period between adding instances was set at 45 minutes which would give Azure heaps of time to see how things were performing after adding an instance and then deciding if another one was required. With traffic ramping up that quickly, an additional small instance could be overwhelmed very quickly, well before the cool down period had passed. A medium instance would give it much more breathing space.

Of course a large instance would give it even more breathing space. As it happened, our 2 year old woke up crying at 1am on Friday and my wife went to check on her. The worried father that I was, I decided to check on HIBP and saw it serving about 2.2k requests per minute with 4 medium instances. I scaled up again to large and went back to bed – more insurance, if you like. (And yes, the human baby was fine!)

5) Azure is amazingly resilient

Clearly the HIBP website got thoroughly hammered, there’s no doubt about that. What tends to happen when a site gets overwhelmed is that stuff starts going wrong. Obviously one of those “going wrong” things is that it begins to slow down and indeed the Apdex I showed earlier reflects this. Another thing that happens is that the site crumbles under the load and starts throwing errors of various types, many of which NewRelic can pick up on. Here’s what it found:

Error rate at 0.0078%

The key figure is in the top right corner – a 0.0078% error rate or, in other words, roughly 1 in every 12,800 requests resulting in an error over the week preceding the time of writing. Now of course that’s only based on the requests that the site could actually process at all and consequently NewRelic could monitor. Those red lines are when HIBP was deemed to be “down” (NewRelic remotely connects to it and checks that it’s up). Having said that, I’ve seen NewRelic report the site as being “offline” before and then been able to hit it via the browser with no problems during the middle of the outage anyway. The ping function it hits on the site shows a fairly constant 2.66 requests per minute so of course it’s entirely possible it was up within a reported outage (or down within a reported uptime!):

Ping throughput rate at 2.66RPM

Inevitably there would have been some gateway timeouts when the site was absolutely inundated and hadn’t yet scaled, but the fact that it continued to perform so well even under those conditions is impressive.
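As an aside, the “ping” function mentioned above is a pattern worth copying: have the uptime monitor hit an endpoint that touches every backing store, so that “up” means the whole stack is up. Here’s a minimal ASP.NET MVC sketch of the idea; the controller and the two check methods are hypothetical, not HIBP’s actual code:

```csharp
using System.Web.Mvc;

public class HealthController : Controller
{
    // Polled by an external monitor (e.g. NewRelic availability checks).
    public ActionResult Ping()
    {
        // Only report healthy if both backing stores respond.
        bool dbOk = CanQuerySqlDatabase();      // e.g. run a trivial SELECT against SQL Azure
        bool tableOk = CanReadTableStorage();   // e.g. a cheap point query against Table Storage

        if (dbOk && tableOk)
            return Content("OK");

        Response.StatusCode = 503;              // tell the monitor something is degraded
        return Content("Degraded");
    }

    private bool CanQuerySqlDatabase() { /* open a connection, SELECT 1 */ return true; }
    private bool CanReadTableStorage() { /* retrieve a known row */ return true; }
}
```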

6) Get lean early

There’s a little magic trick I’ll share with you about scale – faster websites scale better. I know, a revelation isn’t it?! :)

Back in December I wrote about Micro optimising web content for unexpected, wild success. As impressive as the sudden popularity was back then, it paled in comparison to last week, but what it did was force me to really optimise the site for when it went nuts again, which it obviously did. Let me show you what I mean; here’s the site as it stands today:

The present day HIBP website

Keeping in mind the objective is to scale this particular website as far as possible, let’s look at all the requests that go to haveibeenpwned.com in order to load it:

4 requests being made to HIBP

Wait – what?! Yep, four requests is all. The reason the requests to that specific site are low is threefold:

  1. I use public CDNs for everything I can. I’m going to come back and talk about this in the next point as a discrete item because it’s worth spending some time on.
  2. I use the Azure CDN service for all the icons of the pwned companies. This gets them off the site doing the processing and distributes them around the world. The main complaint I have here is that I need to manually put them in the blob storage container the CDN is attached to when what I’d really like is just to be able to point the CDN endpoint at the images path. But regardless, a minute or two when each new dump is loaded and it’s sorted. Update: A few days after I posted this, support for pointing the CDN to a website was launched. Thanks guys!
  3. All the JavaScript and CSS is bundled and minified. Good for end users who make less HTTP requests that are smaller in nature and good for the website that has to pump down fewer bytes over fewer requests. It’d be nice to tie this into the CDN service too and whilst I could manually copy it over, the ease of simply editing code and pushing it up then letting ASP.NET do its thing is more important given how regularly I change things.

But of course this can be improved even further. Firstly, 32KB is a lot for a favicon – that’s twice the size of all the other content served from that domain combined! Turns out I made it 64px square, which is more than necessary – ideally it should be more like 48px square. So I changed it and shaved off half the size, then I put that in the CDN too and added a link tag to the head of my template. That’s another request and 32KB gone for every client that loads the site and looks for a favicon. That’ll go live the next time I push the code.
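For anyone wondering what the bundling and minification in point 3 above actually involves, it’s only a few lines of ASP.NET’s System.Web.Optimization configuration. This is a generic sketch with made-up bundle names, not the site’s real setup:

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // One minified script request and one minified stylesheet request per page
        // instead of several separate files.
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/site.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Bundle and minify even when not running a release build.
        BundleTable.EnableOptimizations = true;
    }
}
```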

Another thing that kept the site very lean was that there is almost no real processing to load that front page; it’s just a controller returning a view. It does, however, use a list of breaches from a SQL Azure database, but here’s what the throughput of that looked like:

Response time for the SQL Azure database

Huh – it’s almost linear. Yep, because it’s cached and only actually loaded from the database once every five minutes by the home page (it’s also hit by the ping service to check DB connectivity is up hence the ~2.5cpm rate in the graph). That rate changes a little bit here and there as instances change and it needs to be pulled into the memory of another machine, but it has effectively no effect on the performance of the busiest page. It also means that the DB is significantly isolated from high load, in fact the busiest query is the one that checks to see if someone subscribing to notifications already exists in the database and it looked like this at its peak:

Requests for checking a notification user in the DB

That’s only 3.43k calls over two hours or a lazy one call every two seconds. Of course it’s fortunate that this is the sort of site that doesn't need to frequently hit a DB and that makes all the difference when the load really ramps up, as database connections are the sort of thing that can quickly put a dent in your response times.
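The five-minute cache behind that flat line is a simple pattern. Here’s a minimal sketch using System.Runtime.Caching, with a made-up cache key and a loader delegate standing in for the real SQL Azure query:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

static class BreachCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Hypothetical loader: the real site pulls the breach list from SQL Azure.
    public static IList<string> GetBreaches(Func<IList<string>> loadFromDatabase)
    {
        var breaches = Cache.Get("breaches") as IList<string>;
        if (breaches == null)
        {
            breaches = loadFromDatabase();               // hits the DB only on a cache miss
            Cache.Set("breaches", breaches,
                DateTimeOffset.UtcNow.AddMinutes(5));    // so each instance queries SQL at most every 5 minutes
        }
        return breaches;
    }
}
```

Each web instance keeps its own copy, which is why the query rate shifts slightly as instances come and go.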

So if I’m not frequently hitting a DB, what am I checking potentially pwned accounts against? It’s all explained in detail in my post on Working with 154 million records on Azure Table Storage – the story of “Have I been pwned?” but in short, well, that heading kind of gives it away anyway – it’s Azure Table Storage. As that blog post explains, this is massively fast when you just want to look up a row via a key and with the way I’ve structured HIBP, that key is simply the email address. It means my Table Storage stats looked like this:

Table Storage usage

Now this is weird because it has two huge peaks. These are due to me loading in 5M Gmail accounts on Thursday morning then another 5M mail.ru and 1M Yandex the following day. As busy as the website got over that time, it doesn’t even rate a mention compared to the throughput of loading in 11M breached records.

But we can still see some very useful stats in the lead-up to that, for example the average server latency was sitting at 7ms. Seven milliseconds! In fact even during those big loads it remained pretty constant and much closer to 4ms. The only thing you can really see changing is the success percentage and the simple reason for this is that when someone searches for an email account and gets no result, it’s a “failed” request. Of course that’s by design and it means that when instead of organically searching for email addresses which gets a hit about half the time, the system is actually inserting new rows therefore the “success” rate goes right up.
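To make the lookup pattern concrete, here’s a hedged sketch of a point query using the Microsoft.WindowsAzure.Storage client. The entity shape and the domain/address key scheme are illustrative assumptions, not HIBP’s actual schema:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Illustrative entity: one row per breached account.
public class BreachedAccountEntity : TableEntity
{
    public string Breaches { get; set; }
}

public static class BreachedAccountStore
{
    public static BreachedAccountEntity Find(string connectionString, string emailAddress)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("BreachedAccounts");

        // Assumed key scheme: partition on the domain, row on the full address.
        string domain = emailAddress.Split('@')[1];
        var retrieve = TableOperation.Retrieve<BreachedAccountEntity>(domain, emailAddress);

        // A point query like this is why lookups stay in single-digit milliseconds;
        // a null result simply means the address isn't in any loaded breach.
        return (BreachedAccountEntity)table.Execute(retrieve).Result;
    }
}
```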

Wrapping up this section though, what I’m getting to is that the site was extremely efficient to begin with and because of that I got a lot more scale out of it than I would have otherwise. I can’t overstate the importance of this; optimisations are absolutely critical for growth and of course they make the thing a whole lot nicer to use even during normal usage patterns. Optimise early, before you actually have to.

7) Steal other peoples’ bandwidth now!

Right, so to the earlier point above about using public CDNs, I’m using these for everything I possibly can. That includes jQuery, Bootstrap (CSS and JS) and Font Awesome. It looks like this:

Requests to external CDNs

This is a great thing perf wise for a few reasons. Firstly, it saved HIBP from having to deal with four additional requests and 69KB for every new visitor that came by, thus freeing it up to focus on other things. Ok, I would have bundled this in with the other JS and CSS on the site so the number of requests would be much the same, but the bandwidth saving still stands, which brings me to the next point: I didn’t have to pay for the data. Multiply that 69KB out by about a quarter million visitors over the busy period and we’re talking about 16GB of bandwidth I didn’t need to pay for.

The final point though is simply speed. So much of this exercise has been about getting that Apdex back up and that includes everything it takes to make the page work for people. The public CDNs not only put the data in locations that are closer to those requesting it; because they’re big and popular (certainly the Google jQuery one is), there’ll also be a certain portion of people who have previously loaded the files from another site and already have them cached in their browser. You don’t get faster than pulling directly from local cache!
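If you’re on ASP.NET, System.Web.Optimization can also point a bundle at a public CDN with a local fallback, which is one way to wire this up. A sketch, where the jQuery version and paths are assumptions:

```csharp
using System.Web.Optimization;

public class CdnBundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.UseCdn = true;

        // Serve jQuery from the public Google CDN; the exact version here is an assumption.
        var jquery = new ScriptBundle("~/bundles/jquery",
            "https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js")
        {
            // If the CDN copy fails to load, fall back to the locally hosted file.
            CdnFallbackExpression = "window.jQuery"
        };
        jquery.Include("~/Scripts/jquery-1.11.1.js");

        bundles.Add(jquery);
    }
}
```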

Actually, while we’re talking geography here, one thing that will inevitably be asked is why I don’t use Azure’s Traffic Manager and distribute the site out to various Azure data centres around the world. I could very easily do that (as Scott explains in that link) and I may well do so in the future, but at the moment the site still runs happily on a single instance the vast majority of the time. Putting another node in, say Europe, would literally double my website costs. Mind you, if ever it gets to a constant load where a single instance isn’t enough, that’ll be about the smartest thing to do as it’ll deal with the increased scale in a way that not only makes it much faster for those in the second location, but also literally doubles my ability to scale under serious load due to having a second data centre at my disposal.

8) Always, always be ready to release at the drop of a hat

Dealing with demand that ramps up this quickly and this unexpectedly requires you to be able to react quickly. One of the things I decided to do very early on was remove a piece of code I’d dropped in only a few days earlier. It was intended to spin off another thread and send an “event” to Google Analytics to log an API hit, something I was previously only tracking via the client side (the web page would miss people directly calling the API). I just wasn’t confident enough that all those outbound requests to Google Analytics via the server weren’t having an adverse impact on the perf, so I killed it.
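Purely for illustration (this is not the code that was removed), a fire-and-forget server-side event looks something like the sketch below, which also shows why it was a candidate for cutting under load: every API hit spawns an extra outbound HTTP request. The Measurement Protocol fields used (v, tid, cid, t, ec, ea) are the commonly documented ones and the property ID is a placeholder:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

static class ApiHitTracker
{
    private static readonly HttpClient Client = new HttpClient();

    public static void TrackApiHit(string action)
    {
        // Fire and forget so the API response isn't held up by Google Analytics,
        // but every call still costs an outbound request, which is exactly the
        // kind of extra work worth cutting when traffic spikes 60-fold.
        Task.Run(async () =>
        {
            var payload = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["v"] = "1",
                ["tid"] = "UA-XXXXXXX-Y",        // placeholder property ID
                ["cid"] = Guid.NewGuid().ToString(),
                ["t"] = "event",
                ["ec"] = "api",
                ["ea"] = action
            });
            try { await Client.PostAsync("https://www.google-analytics.com/collect", payload); }
            catch { /* swallow: tracking must never break the API */ }
        });
    }
}
```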

Here’s the point though: because the code was in a state where it was ready for release (I was building another feature out in a separate Git branch), all I had to do was make the change, push it up to GitHub then wait for the magic to happen:

Azure deployment history showing a release

This is a really low friction process too. There’s a little disruption on response time (I often see NewRelic saying things ran slower for a fraction of a minute), but it’s an easy, easy fix and it doesn’t impact people using the site in any tangible way. The point is that I could make changes and get them live at will. Had the demand kept escalating, I had other ideas on how I could reduce the load via code and the only barrier would be my ability to actually make the changes, never that I couldn’t get it live quickly and without impacting an environment under heavy load.

Of course part of the ability to release quickly is also down to the fact that I have build automation. About four years ago I wrote the You’re deploying it wrong! series and really harped on the value of automation. These days I cringe when I see people deploying websites on an ongoing basis by building it in Visual Studio or god forbid, FTP’ing partial pieces of the solution. There are so many reasons using tools like GitHub and Azure’s Kudu service to push the code up makes sense. Just do it and never look back!

9) Experiment with instance size and auto scale parameters to find your happy place

One of the things I found with this exercise that pleased me enormously is that the scaling up and down and out (and in?) process had no impact on performance of any note during the transition. I wasn’t sure when, say, scaling up to a larger instance size if it might result in failed requests as the logical infrastructure rolled over. Some graphs might have shown a small spike, but I saw no dropped requests or anything else to suggest that users were getting an unacceptable experience.

By Friday night things had started to taper off a bit and whilst I still kept the instance size at “medium”, it had dropped down to only one instance. Then two. Then one. Then two. Then, well, it looked like this (each colour is a different logical machine):

Fluctuating instance counts

It was yo-yo’ing. Two instances got the average CPU down beneath the threshold required to take an instance away, but one kept it above the threshold required to add another one. This was not a “happy place” and indeed every time it dropped back to one, NewRelic would report the browser Apdex dropping off a bit. Clearly my lower bound for the CPU range was too high and I needed to have it really cruising before taking an instance away so that I could be confident that halving the compute power wasn’t going to cause problems.

And there’s your other issue; when you drop from two to one instance, that’s 50% of your scale gone. That’s a big jump in one go and all things being equal, suggests that four small instances rather than two medium ones gives you a lot more tunability. Regardless, I scaled up the minimum instance range to two so I could sleep without stressing over it. The biggest change that would happen overnight now was a one third drop from three to two which is somewhat more palatable than losing half your horsepower.
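A toy way to sanity-check a pair of thresholds, assuming load per instance roughly doubles when you halve the instance count (the numbers here are made up, not Azure’s defaults):

```csharp
using System;

class ScaleFlapCheck
{
    static void Main()
    {
        double scaleOutAbove = 80;  // hypothetical: add an instance when average CPU exceeds this
        double scaleInBelow  = 60;  // hypothetical: remove an instance when average CPU drops below this
        int instances = 2;

        // If two instances sit just under the scale-in threshold, the one remaining
        // instance inherits roughly all of that load.
        double cpuAfterScaleIn = scaleInBelow * instances / (instances - 1);

        // 120% of demand against an 80% trigger: it scales straight back out again.
        Console.WriteLine(cpuAfterScaleIn > scaleOutAbove
            ? "These thresholds will flap between 1 and 2 instances"
            : "These thresholds look stable");
    }
}
```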

Even come Sunday morning, things were still jumping around between one and two instances. I’d been doing a bunch of tweaking with the scale thresholds and clearly I’d left it in a bit of an unstable state. Mind you, the Apdex was now acceptable so users were happy, but there was no need to have instances coming and going so frequently.

Ultimately, I set it back to the defaults which means these conditions for adding an instance:

Default settings for scaling up

And these ones for taking an instance away:

Default settings for scaling down

As soon as I did that, everything settled down. Apdex was fine, instance count stayed stable on just one machine, everyone was happy.

What I will say though is that clearly, every app is different. There are different triggers that indicate changes in load that will likely require changes in scale and different points at which that should happen. You’re constantly trading off not wanting things to get slow in the first place versus not wanting costs to go up unnecessarily. Load testing will only tell you so much about this too; yes the previous tests I’d done were consciously focused on a very narrow aspect of performance (just the API) and there are some much more sophisticated approaches to replicating real-world user behaviour, but nothing tests a system like real world, non-fabricated, good old fashioned popularity!

10) Relax, it’s only money!

And now the question that so many people asked after all the dust had settled – how much did it cost me? All this scale ain’t cheap, right? Let’s break it down courtesy of Azure’s billing portal.

Firstly, here’s how the scale fluctuated over the full period of craziness:

image

One minor frustration is that there’s not really a single good visualisation of load and instances over time in either the “classic” portal or Microsoft’s new shiny one (which this one is from). Regardless, what it does show is a lot of scaling up and down which we kinda knew anyway, but it’s the cost side of it I’m most interested in now. (Incidentally, I’m seeing some oddness in CPU utilisation even after all the load has gone, something I’m yet to get to the bottom of.)

I used three types of resources in ways I wouldn’t usually do under normal circumstances. The first is “small” standard website hours. Normally it’s simple – I leave it on all day and there’s a single instance so I pay for 24 hours a day. You can see it running at a steady pace for the first few days in the graph below:

Small standard website usage

It jumps up on the 10th, again on the 11th then the 12th is actually quite low. By then I was running on mostly medium instances so the cost moves to another tier. Main thing from the graph above is that I consumed 25.08 hours of small standard website I wouldn’t normally have used. Obviously the numbers that exceed 24 hours in one day are due to having multiple simultaneous instances running.

Onto the medium scale. I normally wouldn’t run any medium instances and it was just the 11th and 12th where I’d manually scaled up and consumed hours:

Medium standard website usage

All the hours here can be attributed to the additional load so there’s 55.03 medium instance hours right there.

Onto the big guy and this is the one I scaled up at 1am on the 11th because I was paranoid then scaled back to medium after I got up later that morning:

Large standard website usage

It actually ran multiple large instances at once and quite clearly chewed up 20.34 hours over about a 7 hour period. Oh – incidentally, Azure bills down to a very fine grain so those 20.34 hours actually appear on the bill as 20.34083 hours.

Let’s add it all up and work out some costs:

Instance size | Cost per hour | Extra hours consumed | Total cost
Small         | $0.10         | 25.08                | $2.51
Medium        | $0.20         | 55.03                | $11.01
Large         | $0.40         | 20.34                | $8.14
Total         |               |                      | $21.65

Yes, that is all – $21.65. Based on my personal coffee scale, that’s a few days’ worth of normal cappuccino consumption. In fact I spent more on coffee during the time HIBP supported those hundreds of thousands of visitors than I did on additional hosting!
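If you want to check the arithmetic yourself, the table reduces to a few lines (rates and hours copied straight from it):

```csharp
using System;
using System.Linq;

class BillCheck
{
    static void Main()
    {
        // (size, $/hour, extra hours) taken from the table above
        var usage = new[]
        {
            (Size: "Small",  Rate: 0.10, Hours: 25.08),
            (Size: "Medium", Rate: 0.20, Hours: 55.03),
            (Size: "Large",  Rate: 0.40, Hours: 20.34),
        };

        foreach (var u in usage)
            Console.WriteLine($"{u.Size}: ${u.Rate * u.Hours:F2}");

        Console.WriteLine($"Total: ${usage.Sum(u => u.Rate * u.Hours):F2}"); // ≈ $21.65
    }
}
```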

But of course there are other costs that go up with load as well, such as bandwidth and storage transactions. But they don’t matter. Hang on – why not?! Let me illustrate by way of bandwidth:

Chart of outbound data usage

I used about an extra 20GB during the crazy periods, let’s see how much that costs:

20GB of data costing $1.80

Less than half a cappuccino. It’s the same deal with Table Storage transaction costs – they just don’t even rate a mention at the volumes we’re talking about here.

Call it all less than $25 for the entire exercise. This is precisely why I wrote Donations, why I don’t need them and why I’m now accepting them for “Have I been pwned?” – because the costs are just so low even when the scale temporarily spikes like this. I got some very kind donations during this period and they certainly covered the material costs, but as that blog post explains, the costs I really felt weren’t the ones that hit my hip pocket!

Summary

More than anything, what I took away from this experience is that short term large scale (relatively speaking) is dirt cheap. To that extent, I’d make sure that heaps more instances could be added early (I’ve already upped it to 10) and when I next see, say, four or more instances being used and load escalating quickly, I’d up the instance size very early as well. I’d far prefer to later make a decision to degrade the experience and save money than to suddenly realise that visitors are getting a sub-par experience because my scale couldn’t keep up with demand.

The other major thing is the observations around app readiness and dev ops. It is so important to have an app that’s already well-optimised and can be released quickly, not just in scenarios like this, but even under normal circumstances. I cringe at some of the apps I’ve seen recently that have fundamental design flaws that mean they will be massively computationally expensive from day one – missing database indexes, heaps of stuff in session state, no bundling of client assets – all of these things decrease the ability to scale and mean you’ll have to do it much earlier at much more cost. Same again when you look at deployment practices and it’s just one guy who compiles locally and pushes everything across the wire – it’s just entirely unnecessary.

But what I love most about services like Azure and the things this exercise has shown is that it truly democratises standing up websites of scale. Anyone can jump over to the portal right now and provision services that just a few years ago were the sole domain of infrastructure professionals working with “big iron” they’d order with lengthy lead times and serious cost. That you can now get started down that path on a cappuccino budget is enormously empowering.

Update, 21 Sep: I want to add an addendum here which might put some of the perf figures in perspective. After all this load had settled down, I noticed the CPU constantly increasing in utilisation even under light traffic loads. It would increase pretty incrementally up into 60% plus territory until the app was restarted then it would drop back to low single digit utilisation immediately. Obviously something was chewing up resources and I just couldn't explain it — I'd made no apparent changes in code that would account for the behaviour. Today, after filing a ticket with NewRelic on a separate issue, I've been advised by them that their current version (3.6.177.0) had been identified as buggy and leading to this behaviour. I rolled back to 3.4.24.0 and everything has gone back to normal. I've had a lot of API activity on the site today (up to 2k requests a minute) and I'm now seeing the CPU remaining very stable. The buggy NewRelic version was added on the 29th of August, a couple of weeks before the high traffic load. It's entirely possible I just never noticed the increased CPU because the organic load was light and I was also pushing a lot of changes very frequently that would reset the app and bring the CPU back down again. Of course it all begs the question — how many of the figures above were impacted by this? The answer is simple — I don't know! On the one hand, instances were being changed frequently enough that there was a lot of restarting of the app but on the other hand, clearly the site was placing an unnecessary burden on the CPU and that may well have been compounded under load. All I know is that it wasn't performing as well as it should but in the context of this blog post, it may well have helped demonstrate the scaling options, just don't use the figures as an emphatic "Azure does X when the traffic is Y" yardstick.

Update, 23 Sep: Further to the update above, I've just published Your Azure website CPU is going nuts and it’s not your fault which explains exactly what happened with the high CPU utilisation, what I saw before identifying the root cause and what I'm seeing now — massive difference!

14 Jul 23:37

Personal Productivity: Business vs. busyness vs. laziness

by Scott Hanselman

image

There's an ancient cliché that drives a lot of my thinking about personal productivity. "Excessive busy-ness is a common form of laziness."

Busy-ness in the Tibetan tradition is considered the most extreme form of laziness. Because when you are busy you can turn your brain off. You’re on the treadmill. The only  intelligence comes in the morning when you make your To Do list and you get rid of all the possible space that could happen in your day. - Elephant Journal, 2008

The Tibetan term lelo, as I understand it, begins to describe this kind of laziness.

Kausidya (Sanskrit; Tibetan Wylie: le lo) is a Buddhist term translated as "laziness" or "spiritual sloth".

Alan Wallace explains that kausidya (lelo in Tibetan) refers to a very specific type of laziness, that is concerned only with virtuous activity. Wallace explains from Wikipedia:

[...] lelo in Tibetan, is often translated as ‘laziness,’ but it is much more specific. If a person is working sixteen hours a day, hellbent on earning a whole lot of money with absolutely no concern for virtue, from a Buddhist perspective you could say that person is subject to lelo. A workaholic is clearly not lazy, but such a person is seen as lelo in the sense of being completely lethargic and slothful with regard to the cultivation of virtue and purification of the mind. Our translation of this term is ‘spiritual sloth,’ which we have taken from the Christian tradition, where it is very comparable to the Buddhist notion.

I'm not saying you're lazy, so don't get mad quite yet. I'm saying that if you're using "I'm too busy" as an excuse or a reason to not do something important to you, then you might want to give your situation a closer look. I'm saying that sometimes we are busy with work, but not the kind of work we should be busy with.

Sakyong Mipham states: "Speediness is laziness when we use it as a way to avoid working with our minds."

Of course, there's busy people who are literally on fire and being chased by ninjas, I'll give them a pass. But when someone says "I'm too busy" perhaps they are letting you know they are too important to talk to you, or they are just using it as an excuse to not engage. Often I've said in the past that "I'm busy" when I really mean "I'm not really that into your idea to take the time to think deeply about it."

So when we say "being busy is a form of being lazy" we're saying think about what's important, and think about the work you're doing. Is it moving the ball forward? Is it moving YOUR BALL forward? The Ball that you care about?

I have an hour set aside once a week that's for a meeting. The meeting is with myself. No one else comes to this meeting but me. I think about what I'm doing, where I'm going, and what I need to be working on. I use this meeting to think about the business and busyness of my previous week. I think about what busy work I did that was a waste of time, and try to set myself up for success in the coming week.

My parents and brother are convinced that I'm too busy to hang out or have lunch. I constantly hear "Well, we didn't want to bother you." I'm never too busy for them. Time can be made. It's amazing how quickly a day of meetings (or a half-day) can be cancelled or moved. Days can be cleared and time can be made.

It's easy to get caught up in the chaos of business. It's fun to play Tetris with your Outlook calendar. It's satisfying to pack those productive meetings in and feel important and urgently needed. It's cathartic to delete email and think that getting rid of that email is moving my life forward, but often it's not. Often I'm just on a treadmill, running to keep up. I know this treadmill and my inertia keeps me going.

The hard work is to consciously step off the treadmill, step away, turn around and look at it. What can be removed? What can be refined? In what ways have we taught our bosses or co-workers how to treat us and our time?

I was in Egypt once and the hosts wanted to take me to the Sphinx, but I didn't want to miss a weekend with my sons. They may have thought me rude, but it was about consciously choosing one priority over another. I knew my time and my priorities and made a conscious choice on how I was going to spend it.

In what way are you buying into the idea of being always busy? What are you doing to find balance?


Sponsor: Thanks to friends at RayGun.io. I use their product and LOVE IT. Get notified of your software’s bugs as they happen! Raygun.io has error tracking solutions for every major programming language and platform - Start a free trial in under a minute!



© 2014 Scott Hanselman. All rights reserved.
     
13 Jul 23:04

#OurABC – except when it comes to the Windows Platform in Australia. Part 1: #OurABCToo

by Sheeds

ourABCrebrand

Long-time readers of WPDU would know that, well, for a long-time we have been engaging ABC Australia over their continued lack of support for Windows Phone. Moving into the second half of 2014 one would have hoped for some improvement in our predicament over ABC Apps and services for our devices. Unfortunately, the events of last week sadly prove otherwise. Not only have the ABC proved resolute in their stance of not bringing about official Apps for the Windows Phone Platform in Australia, they seem to have also acted in record time this week to deny WP8AU users the opportunity to access their most popular service via an unofficial App for Windows Phone.

But before we get to that – let’s refresh a little history.

We can start by categorically confirming where the major ABC Services/Apps are in relation to Windows Phone (smartphone) and Windows 8 (tablet/desktop) and catching up on the WPDU #OurABCToo journey to date.

Apple Store:

iPhone_ABC_Apps

Google Play Store:

GooglePlay_ABCApps

Windows Phone / Windows 8 Store:

Win8WP8_ABCApps

So. Still no Apps for Windows Phone or Windows 8 stores from the ABC. Further to this, their most popular service, ABC iView also does not support Internet Explorer on any WP7x or WP8x device – so there is no way to watch ABC iView content at all on a Windows Phone. This is also the case for the ABC Radio App – however ABC streams of radio content are also available in other App services like tunein radio (unofficial 3rd party delivery mechanism).

Unfortunately, in both a FOI reply to WPDU earlier this year – and as recently as this week to Gizmodo AU – the ABC (and members of the iView team) continue to perpetrate an apparent falsehood by claiming incorrectly:

“Auntie added that iView users who want to access the service on Windows Phone devices can navigate to the iView page via Internet Explorer on their devices.” (Gizmodo JUL 2014)

“I understand that access to iview is currently possible on Microsoft platforms including Windows 8 and Windows Phones via browser..” ABC FOI Officer formal reply to WPDU APR 2014

This is simply not good enough.

ABC Australia has long argued that their user statistics have not shown enough data logs/hits from Windows Phone users to support developing their content – despite iView’s charter/FAQ stating:

“The ABC seeks to make its publicly funded iview service available on third party devices and platforms wherever the public choose to consume media content” link.

SBS, by contrast, place their audience – and delivery of SBS content to them – first. Consequently, with a fraction of the budget ABC Australia receives from Australian income tax revenue, SBS has managed to deliver the most comprehensive platform content delivery range of any media organisation in Australia. Caroline Bartle, Manager of SBS On Demand, earlier this year confirmed to WPDU this was indeed their aim – to maximise the audience reach for the content SBS license and deliver.

SBS On Demand Smartphone/Tablet App availability:

  • iOS (iPhone, iPad & iPod Touch) SBS_Demand
  • Microsoft Xbox 360 and Xbox One
  • Sony PlayStation 3 and PS4
  • Windows 8
  • Windows Phone 7 & 8
  • Selected Samsung smart phones and tablets running Android OS 4.0+ and above
  • Selected Android smart phones and tablets running Android OS 4.03 and above
  • Kindle devices (2nd and 3rd Generation)

In 2013, after months of on and off engagement with the ABC, WPDU established a dialogue with a representative from the digital team who was open to hearing more from the Windows Phone community. Following from this, we launched the #OurABCToo campaign. This asked #wp8au users and our community to use their devices to hit the ABC web pages for iView home, News home and Technology home once per day over the weekend of the 19th and 20th of October.

As you can see from the analysis of the ABC web data below (gained from an FOI request launched by WPDU), during this 2 day campaign (to be completely fair, we extended it to 3 days because our article was published on the Friday, to account for any trigger-happy clickers) even our limited reach to Windows Phone users in Australia yielded substantial statistical results. The iView home page stats had Windows tablets and Phones jump to the 2nd highest platform for device hits behind iOS – and on the ABC technology page the #Lumia920 almost beat the iPhone for site visits during the campaign. A truly amazing result!

iView_OurABCToo

abctech_OurABCToo

Android who?

In discussing the newly launched “The Brief” tablet format News App released for – yes, iPad and Android – tablets, ABC Managing Director Mark Scott stated “We know our audiences want ABC content anytime, anywhere.”

Anywhere except on Windows Phone – although in the case of The Brief, the ABC has at least confirmed they are looking at a Windows 8 version.

Peter Marks, iOS Developer, ABC TV Multiplatform stated in an interview this year that the updated release of the iOS iView App in 2014 even took advantage of new iPhone hardware releases “Recent Apple devices such as the iPhone 5S and the iPad Air have a new 64 bit CPU, and software built for arm64 gets a two times increase in performance and a reduction in the RAM used on these devices…..The user will find the app’s scrolling and transitions are smoother and generally more snappy.”

Sounds a bit like they are aiming for the smooth and fluid sort of scrolling you get on Windows Phone to us.

So here we have an example of the ABC updating its existing App – with resources and funds allocated to tweak the performance for its existing audience of iOS users – rather than growing their audience by adding a new platform like Windows Phone.

Imagine if you will a Ford service centre – which has all its branding as “We Service Fords” in store, print, web, radio – across advertising, business cards, signage, newsletters etc. If you owned a Toyota, would you take your car there for a service? If you worked there – would you think that there are not many Toyotas out there and not bother trying to market your services to them?

As of today – there are more Windows Phone devices in market in this country than there were iPhones when the first ABC iView App for iOS was developed and launched. Taking this further – there are more Windows 8.x devices (Desktops, Multi-form factor devices and Tablets) in market today than there are iPads in Australia.

However we are faced with a national broadcaster that has not completed any studies into ROI, audience interest and participation (bar one recent survey barely promoted by ABC digital). Our ABC Australia remains firmly blinkered in its inside-out view that user logs of its sites and services tell it that Windows Phone users don’t visit their sites… don’t use their services.

WPDownUnder – and many more WP users across social media – would beg to differ – once the ABC stops ignoring us and pushing us away.

Sheeds.

<In part 2 we will outline the events of this week – regarding the ABC, an unofficial iView App, the reaction – and wash-up. Don’t miss it folks>

13 Jul 23:02

#ourABC – Now you see it, Now you don’t. Part 2: #OurABCToo

by Sheeds

ourABCrebrand2

It would seem that #ourABC embraces its audience equally – unless you are a Windows Phone user in this country. Not just content to have us as second-class citizens, the ABC has made an exclusion zone of Australia for any official Windows Phone access to ABC TV and Radio content. This means that an ever increasing number of Aussie smartphone owners are locked out of accessing the ABC’s licensed content – in direct conflict with the ABC’s own charter.

And as we will go on to show our readers, this week they would appear to have not only demonstrated their inconsistency in how they view and approach the major smartphone platforms, but also by their actions have left the ABC open to questions of bias, discrimination and even competence around their iView and Innovation department management of Windows Phone.

If you missed Part 1 – you should check it out. It has some important background to this week’s latest events in the unfolding saga of the ABC, iView and Windows Phone.

As we set out in our earlier post, the ABC has a long history of ignoring Windows Phone as a platform in Australia, not even bothering to seriously consider it when expanding their App services. Despite YOY growth in the Asia-Pacific region of >100%, market share now approaching the 10% mark in Australia and strong enterprise and business sales, the ABC has consistently refused to engage with Windows Phone users on its mobile services. Despite the fact that there are more Windows Phones in-market in Australia now than there were iPhones when they launched their first iOS App….

Presently the ABC have no intentions, plans, strategy or roadmap to introduce apps like iView to Windows Phone. A reasonable person could therefore reach the conclusion that, this being the case, independent developers should be able to create their own unofficial WP Apps for the growing number of users locked out by the ABC. A streaming-only client on the secure Windows Phone OS (locked down by its very design), using existing ABC APIs and developed at no cost to, or use of, ABC resources – in the current climate of an organisation facing the prospect of cutbacks by an incoming Liberal government – looks like a win-win prospect. The ABC grows its audience/viewing-user base, Windows Phone users get an App and are no longer disadvantaged compared to their iOS and Android peers. What’s not to like about that?

Well – this week, for what may ultimately have been a matter of only a few hours, this was the case as we welcomed, albeit briefly, iView FTW! to the Windows Phone Store in Australia.

iviewftw_store

SheedsTweet

On Wednesday, a keen-eyed WPDU reader alerted us to the fact that they had discovered an unofficial iView App for WP8.1 in the Australian store. Naturally, given our long history advocating for this eventuality (official or unofficial) – we began alerting the Aussie WP community of this event.

The App was also launched in the Windows 8 store at the same time, most likely using some of the benefits in development of Microsoft’s new Universal App system.

iviewftw_Screenies

The app was basic – not having Live Tiles or some of the other unique WP platform features – however it was fully functional, with smooth swipe and transition UI effects, and accessed the full range of ABC iView channels/content areas. Like the official App on Android and iOS, it was a stream-only App, with no capability for downloading and saving the video streamed. Everything looked great.

…..for the few hours that it lasted.

Later the same day, in what must have been just a few hours since the App was discovered in the store, it was pulled from both the Windows Phone and Windows 8 stores. Probably following a “takedown” notice initiated by the ABC – although they refused to confirm to WPDU whether they had issued one when asked by us this week.

Subsequent to the App being removed, WPDU discovered that the Developer was a Microsoft Australia employee who had released it in his own name, presumably in his own time as developer. This was quickly captured by the register.co.uk, who reported on the situation with some commentary from both Microsoft and the ABC in a rapidly escalating situation.

WPDU then followed up on a meeting we held last week with Rebecca Heap, Head of TV Strategy & Digital Products, and representatives from ABC Innovation and Corporate Affairs on the case for ABC Australia developing an iView App for Windows Phone users, by asking them a number of formal questions on this matter:

  1. Did the ABC formally request or initiate a request to Microsoft that the App be removed/taken down from both the Windows Phone &/or Windows Stores?
  2. If so – how (and under what public T&C) did the ABC base this request?
  3. If so, was it a prompt (it was the same day the community became aware the App was out) and proactive action by the ABC to protect their content providers copyright?
  4. Did the iviewftw Windows Phone App (which was stream only per the official App) breach any public or private ABC guidelines, and can the ABC share those breach specifics with me?
  5. Is there an onus on the ABC through their content agreements and services like iView to proactively and effectively protect copyright – and how is this related to unofficial/official Apps as per the example yesterday of iviewftw?

Late Friday night, WPDU received a reply from ABC Australia. Rebecca confirmed (in line with prior public statements made to The Register);

“The ABC was aware of an app in the marketplace providing unofficial access to iView content, and is investigating this matter

The ABC is obliged to take reasonable action to ensure iView content is used within the rights we acquire

At present, the rights that we acquire do not allow for distribution of iView content outside of ABC-approved services”

If, as suspected, the ABC were behind the take-down requests in the marketplaces, then the ABC did indeed take incredibly prompt action in “protecting their content”. Of course, what’s also evident from this week’s events is that unlike some of its competitors, Microsoft’s App marketplaces for phone and tablet/PC have an extremely efficient and effective process for content providers and copyright holders to deal with or investigate any possible concerns over potentially infringing Apps. This is to Microsoft’s credit.

However this matter doesn’t – and shouldn’t end there. Let’s examine for a moment what the ABC has told us. Let’s look at their words….and let us consider their actions in relation to Windows Phone in contrast to other examples the ABC has shown us.

Forget for the moment the fact that a WP developer – in their spare time without any $ funding/budget – can create a functional, fluid and platform consistent UI for streaming-only iView content to WP8.1 devices. Put to one side that this makes a mockery of the fact that the ABC has not initiated any sort of ROI, feasibility or study into the merits of releasing an App for Australian WP users and that our own FOI request confirmed that no strategy, plans or analysis had been commenced or previously undertaken on WP service delivery.

What has our ABC shown us through how it has dealt with another smartphone platform, one which was also deprived of an official App for its devices until almost the end of 2013? What other precedents exist, one asks?

aview_android

Meet aview.

When WPDU compared the Android experience, this little App (still in the store at the time of writing) jumped out at us. aview is an unofficial iView App with identical streaming-only access to the full range of ABC iView content as iviewftw offered briefly on Windows Phone. Furthermore, it first hit the Google Play store back in July 2013! This means it was available in Google’s phone and tablet App store for almost 5 months before the ABC launched their own official App in December 2013! It now has a lazy 10,000+ downloads and its average rating of 4.5 stars actually beats the official ABC App on the same platform!

Yes folks, that’s 5 months. Not 5 hours and a takedown notice! Still in the store today – that means a total of almost 12 months that this unofficial App has been available, and almost 7 months alongside the official ABC App for Android, which was launched with fanfare and quotes such as

“The first version of the Android app ensures Australian audiences never miss a moment of their favourite ABC TV programs, with access to iview on a wider range of portable devices.”

So on a quick checklist on the above replies from the ABC:

  1. The ABC was aware of an app in the marketplace providing unofficial access to iView content, and is investigating this matter. Hard to miss an Android unofficial App for 12 months. Gizmodo and other sites covered it. WPDU knows the ABC tracks device stats and delivery mechanisms. The anonymous Developer even discusses this in an interview with Ausdroid the month after its launch.
  2. The ABC is obliged to take reasonable action to ensure iView content is used within the rights we acquire. What – if any – action have they taken against aview/Google Store? We suspect nil. Or at least nothing that has resulted in the App being taken down. This was also discussed in the Ausdroid interview (link in Q1).
  3. At present, the rights that we acquire do not allow for distribution of iView content outside of ABC-approved services. This is at best misleading – and could potentially be perceived as an outright lie. The presence of the Android unofficial App aview directly challenges this statement by the ABC – unless the ABC can demonstrate this is an ABC approved service.

The ABC’s own charter/guidelines/FAQ state that “The ABC seeks to make its publicly funded iview service available on third party devices and platforms wherever the public choose to consume media content”. Furthermore, “Use of third party applications that allow downloadable versions of ABC iview programs is therefore unauthorised and not supported in any way”.

The aview App meets both of the above criteria. So does the iviewftw WP8.1 App. Why is one still in the store with apparent complete inaction by the ABC, while the other was withdrawn in mere hours following discovery by the public? One could seemingly make the case that the ABC is discriminating against one platform over the other.

If they have not released an official App for WP, and have no current plans to develop an official Windows Phone App for iView (and other ABC services), why do they not provide a process for 3rd parties to develop one? In the absence of committing to the platform in line with their charter, why, in all the months of WPDU engagement and pro-active raising of this with the ABC through dialogue and the FOI process, has the ABC not made any comment or suggestion in this area?

In the case of aview, to make matters worse for the ABC’s position, the App is on GitHub, with freely available open source code that any developer could use to create a replica or similar app.

Is this a question of bias, discrimination – or just incompetence on their part? Is their reaction an emotional and defensive response to the sudden presence of an unofficial Windows Phone App which met the streaming-only criteria, functioned well and made a mockery of A) their lack of proper assessment of Windows Phone as an App platform and B) the fact that one could be created with no budget or funding, in a single developer’s spare time? Where is the ABC’s prompt and obligatory action in the case of aview to protect their copyright holders’ licensed content from third party services?

How ironic that the ABC’s “#ourABC” rebrand, launched this month, emphasises “the role it plays in all of our lives – it’s about inclusion, togetherness and connection” – or so ABC PR would have us believe.

“Throw your arms around us”, the new #ourABC theme for 2014, would seem to Windows Phone users to be more of a cold shoulder, a slap in the face and a raised finger from ABC Australia to us all. It’s #ourABCToo, Mark Scott.

Sheeds.

We will be putting the extra questions raised in this article to the ABC, in particular Rebecca Heap, for additional comment. If you wish to add your voice on the matter of iView/Windows Phone Apps from #ourABC, you can direct your comments to:

Rachel Fergus
Senior Publicist TV Marketing & Communications
T. 02 8333 5085 E. fergus.rachel@abc.net.au

20 Jun 03:20

Pathfinding and Local Avoidance for RPG/RTS Games using Unity

If you are making an RPG or RTS game, chances are that you will need to use some kind of pathfinding and/or local avoidance solution for the behaviour of the mobs. They will need to get around obstacles, avoid each other, find the shortest path to their target and properly surround it. They also need to do all of this without bouncing around, getting stuck in random places and behave as any good crowd of cows would:


tutorial_00.jpg

In this blog post I want to share my experience on how to achieve a result that is by no means perfect, but still really good, even for release. We'll talk about why I chose to use Unity's built in NavMesh system over other solutions and we will create an example scene, step by step. I will also show you a couple of tricks that I learned while doing this for my game. With all of that out of the way, let's get going.


Choosing a pathfinding library


A few words about my experiences with some of the pathfinding libraries out there.


Aron Granberg's A* Project


This is the first library that I tried to use, and it was good. When I was doing the research for which library to use, this was the go-to solution for many people. I checked it out, it seemed to have pretty much everything needed for the very reasonable price of $99. There is also a free version, but it doesn't come with Local Avoidance, so it was no good.


Purchased it, integrated it into my project and it worked reasonably well. However, it had some key problems.


  1. Scene loading. It adds a solid chunk of time to your scene loading time. When I decided to get rid of A* and deleted all of its files from my project (after using it for 3 months), my scene loading time dropped to 1-2 seconds when I press "Play", down from 5-10 seconds. It's a pretty dramatic difference.
  2. RVO Local Avoidance. Although it's one of the library's strong points, it still had issues. For example, mobs were getting randomly stuck in places they should be able to get through, as well as around corners. I'm sure there is a setting buried somewhere, but I just could not get it right and it drove me nuts. The good part about the local avoidance in this library is that it uses the RVO algorithm, and the behaviour of the agents in a large crowd was flawless. They would never go through one another or intersect. But when you put them in an environment with walls and corners, it gets bad.
  3. Licensing issues. However, the biggest problem appeared about a month ago: the library doesn't have any local avoidance anymore (I bet you didn't see that one coming). After checking out Aron Granberg's forums one day, I saw that due to licensing claims by UNC (the University of North Carolina), which apparently holds the rights to the RVO algorithm, he was asked to remove RVO from the library or pay licensing fees. Sad.

UnitySteer


Free and open source, but I just could not get this thing to work. I'm sure it's good, it looks good on the demos and videos, but I'm guessing it's for a bit more advanced users and I would stay away from it for a while. Just my two cents on this library.


Unity's built in NavMesh navigation


While looking for a replacement for A* I decided to try out Unity's built in navigation system. Note - it used to be a Unity Pro only feature, but it got added to the free version some time in late 2013, I don't know when exactly. Correct me if I'm wrong on this one. Let me explain the good and bad sides of this library, according to my experience up to this point.


The Good

It's quick. Like properly quick. I can easily support 2 to 3 times more agents in my scene without the pathfinding starting to lag (meaning that the paths take too long to update) and, I believe, without the FPS issues caused by the local avoidance. I ended up limiting the number of agents to 100, just because they fill the screen and there is no point in having more.


Easy to setup. It's really easy to get this thing to work properly. You can actually make it work with one line of code only:


agent.destination = target.position;

Besides generating the navmesh itself (which is two clicks) and adding the NavMeshAgent component to the agents (default settings), that's really all you need to write to get it going. For that reason, I recommend this library to people with little or no experience with this stuff.


Good pathfinding quality. What I mean by that is agents don't get stuck anywhere and don't have any problem moving in tight spaces. Put simply, it works like it should. Also, the paths that are generated are really smooth and don't need extra work like smoothing or funnelling.


The Bad

Not the best local avoidance. It's slightly worse than RVO, but nothing to be terribly worried about, at least in my opinion and for the purposes of an ARPG game. The problem comes out when you have a large crowd of agents - something like 100. They might intersect occasionally, and start jiggling around. Fortunately, I found a nice trick to fix the jiggling issue, which I will share in the example below. I don't have a solution to the intersecting yet, but it's not much of a problem anyway.


That sums up pretty much everything that I wanted to say about the different pathfinding solutions out there. Bottom line - stick with NavMesh, it's good for an RPG or RTS game, it's easy to set up and it's free.


Example project


In this section I will explain step by step how to create an example scene, which should give you everything you need for your game. I will attach the Unity project for this example at the end of the post.


Creating a test scene


Start by making a plane and set its scale to 10. Throw some boxes and cylinders around, maybe even add a second floor. As for the camera, position it anywhere you like to get a nice view of the scene. The camera will be static and we will add point and click functionality to our character to make him move around. Here is the scene that I will be using:


tutorial_01.jpg


Next, create an empty object, position it at (0, 0, 0) and name it "player". Create a default sized cylinder, make it a child of the "player" object and set its position to (0, 1, 0). Create also a small box in front of the cylinder and make it a child of "player". This will indicate the rotation of the object. I have given the cylinder and the box a red material to stand out from the mobs. Since the cylinder is 2 units high by default, we position it at 1 on the Y axis to sit exactly on the ground plane:


tutorial_02.jpg

We will also need an enemy, so just duplicate the "player" object and name it "enemy".


tutorial_03.jpg

Finally, group everything appropriately and make the "enemy" game object into a prefab by dragging it to the project window.


tutorial_04.jpg

Generating the NavMesh


Select all obstacles and the ground and make them static by clicking the "Static" checkbox in the Inspector window.


tutorial_05.jpg

Go to Window -> Navigation to display the Navigation window and press the "Bake" button at the bottom:


tutorial_06.jpg

Your scene view should update with the generated NavMesh:


tutorial_07.jpg

The default settings should work just fine, but for demo purposes let's add some more detail to the navmesh to better hug the geometry of our scene. Click the "Bake" tab in the Navigation window and lower the "Radius" value from 0.5 to 0.2:


tutorial_08.jpg

Now the navmesh describes our scene much more accurately:


tutorial_09.jpg

I recommend checking out the Unity Manual here to find out what each of the settings does.


However, we are not quite done yet. If we enter wireframe mode we will see a problem:


tutorial_09_01.jpg

There are pieces of the navigation mesh inside each obstacle, which will be an issue later, so let's fix it.


  1. Create an empty game object and name it "obstacles".
  2. Make it a child of the "environment" object and set its coordinates to (0, 0, 0).
  3. Select all objects which are an obstacle and duplicate them.
  4. Make them children of the new "obstacles" object.
  5. Set the coordinates of the "obstacles" object to (0, 1, 0).
  6. Select the old obstacles, which are still direct children of environment and turn off the Static checkbox.
  7. Bake the mesh again.
  8. Select the "obstacles" game object and disable it by clicking the checkbox next to its name in the Inspector window. Remember to activate it again if you need to Bake again.

Looking better now:


tutorial_09_02.jpg

Note:  If you download the Unity project for this example you will see that the "ground" object is actually imported, instead of a plane primitive. Because of the way that I initially put down the boxes, I was having the same issue with the navmesh below the second floor. Since I couldn't move that box up like the others (because it would also move the second floor up), I had to take the scene to Maya and simply cut the part of the floor below the second floor. I will link the script that I used to export from Unity to .obj at the end of the article. Generally you should use separate geometry for generating a NavMesh and for rendering.


Here is what the scene hierarchy looks like after this small hack:

tutorial_09_03.jpg

Point and click


It's time to make our character move and navigate around the obstacles by adding point and click functionality to the "player" object. Before we begin, you should delete all capsule and box colliders on the "player" and "enemy" objects, as well as from the obstacles (but not the ground) since we don't need them for anything.


Start by adding a NavMeshAgent component to the "player" game object. Then create a new C# script called "playerMovement" and add it to the player as well. In this script we will need a reference to the NavMeshAgent component. Here is how the script and game object should look:


using UnityEngine;
using System.Collections;

public class playerMovement : MonoBehaviour {
	
  NavMeshAgent agent;

  void Start () {
    agent = GetComponent< NavMeshAgent >();
  }

  void Update () {

  }
}

tutorial_10.jpg

Now to make the character move, we need to set its destination wherever we click on the ground. To determine where on the ground the player has clicked, we need to first get the location of the mouse on the screen, cast a ray towards the ground and look for collision. The location of the collision is the destination of the character.


However, we want to only detect collisions with the ground and not with any of the obstacles or any other objects. To do that, we will create a new layer "ground" and add all ground objects to that layer. In the example scene, it's the plane and 4 of the boxes.


Note:  If you are importing the .unitypackage from this example, you still need to setup the layers!


Here is the script so far:


using UnityEngine;
using System.Collections;

public class playerMovement : MonoBehaviour {
	
  NavMeshAgent agent;

  void Start () {
    agent = GetComponent< NavMeshAgent >();
  }

  void Update () {
    if (Input.GetMouseButtonDown(0)) {
      // ScreenPointToRay() takes a location on the screen
      // and returns a ray perpendicular to the viewport
      // starting from that location
      Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
      RaycastHit hit;
      // Note that "11" represents the number of the "ground"
      // layer in my project. It might be different in yours!
      LayerMask mask = 1 << 11;
      
      // Cast the ray and look for a collision
      if (Physics.Raycast(ray, out hit, 200, mask)) {
        // If we detect a collision with the ground, 
        // tell the agent to move to that location
        agent.destination = hit.point;
      }
    }
  }
}

Now press "Play" and click somewhere on the ground. The character should go there, while avoiding the obstacles along the way.


tutorial_11.jpg

If it's not working, try increasing the ray cast distance in the Physics.Raycast() function (it's 200 in this example) or deleting the mask argument from the same function. If you delete the mask it will detect collisions with all boxes, but you will at least know if that was the problem.
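If you would rather not hardcode the layer index at all, here is a minimal alternative sketch (it assumes your layer is literally named "ground" and replaces only the mask line above); LayerMask.NameToLayer() looks the index up by name:


      // Build the mask from the layer name instead of a magic number.
      // NameToLayer() returns -1 if no layer with that name exists, so set the layer up first.
      LayerMask mask = 1 << LayerMask.NameToLayer("ground");

      if (Physics.Raycast(ray, out hit, 200, mask)) {
        agent.destination = hit.point;
      }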


If you want to see the actual path that the character is following, select the "player" game object and open the Navigation window.


Make the agent follow the character


  1. Repeat the same process as we did for the "player" object - attach a NavMeshAgent and a new script called "enemyMovement".
  2. To get the player's position, we will also add a reference to the "player" object, so we create a public Transform variable. Remember to go back to the Editor and connect the "player" object to that variable.
  3. In the Update() method set the agent's destination to be equal to the player's position.

Here is the script so far:



using UnityEngine;
using System.Collections;

public class enemyMovement : MonoBehaviour {
	
  public Transform player;
  NavMeshAgent agent;

  void Start () {
    agent = GetComponent< NavMeshAgent >();
  }

  void Update () {
    agent.destination = player.position;
  }
}

Press "Play" and you should see something like the following screenshot. Again, if you want to show the path of the enemy object, you need to select it and open the Navigation window.


tutorial_12.jpg

However, there are a few things that need fixing.

  • First, set the player's move speed to 6 and the enemy's speed to 4. You can do that from the NavMeshAgent component.
  • Next, we want the enemy to stop at a certain distance from the player instead of trying to get to his exact location. Select the "enemy" object and on the NavMeshAgent component set the "Arrival Distance" to 2. This could also represent the mob's attack range.
  • The last problem is that generally we want the enemies to body block our character so he can get surrounded. Right now, our character can push the enemy around. As a temporary solution, select the "enemy" object and on the NavMeshAgent component change the "Avoidance Priority" to 30.

Here is what the docs say about Avoidance Priority:


When the agent is performing avoidance, agents of lower priority are ignored. The valid range is from 0 to 99 where: Most important = 0. Least important = 99. Default = 50.


By setting the priority of the "enemy" to 30 we are basically saying that enemies are more important and the player can't push them around. However, this fix won't work so well if you have 50 agents for example and I will show you a better way to fix this later.
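If you prefer to configure these values from code rather than clicking through the Inspector, a minimal sketch of the enemy's Start() could look like this (same numbers as above, nothing new; it is just an alternative to the manual setup):


  void Start () {
    agent = GetComponent< NavMeshAgent >();
    agent.speed = 4;                // the player stays at 6, so enemies are slower
    agent.stoppingDistance = 2;     // the "attack range" we set in the Inspector
    agent.avoidancePriority = 30;   // lower number = more important, so the player can't push enemies around
  }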


tutorial_13_vid.gif

Making a crowd of agents


Now let's make this a bit more fun and add, let's say, 100 agents to the scene. Instead of copying and pasting the "enemy" object, we will make a script that instantiates X number of enemies within a certain radius and makes sure that they always spawn on the navmesh, instead of inside a wall.


Create an empty game object, name it "spawner" and position it somewhere in the scene. Create a new C# script called "enemySpawner" and add it to the object. Open enemySpawner.cs and add a few public variables: an int for the number of enemies that we want to instantiate, a GameObject reference to the "enemy" prefab, and a float for the radius in which to spawn the agents. And one more – a Transform reference to the "player" object.


using UnityEngine;
using System.Collections;

public class enemySpawner : MonoBehaviour {
	
  public float spawnRadius = 10;
  public int numberOfAgents = 50;
  public GameObject enemyPrefab;
  public Transform player;

  void Start () {

  }
}

At this point we can delete the "enemy" object from the scene (make sure you have it as a prefab) and link the prefab to the "spawner" script. Also link the "player" object to the "player" variable of the "spawner".


To make our life easier we will visualise the radius inside the Editor. Here is how:


using UnityEngine;
using System.Collections;

public class enemySpawner : MonoBehaviour {
	
  public float spawnRadius = 10;
  public int numberOfAgents = 50;
  public GameObject enemyPrefab;
  public Transform player;

  void Start () {

  }

  void OnDrawGizmosSelected () {
    Gizmos.color = Color.green;
    Gizmos.DrawWireSphere (transform.position, spawnRadius);
  }
}

OnDrawGizmosSelected() is a function just like OnGUI() that gets called automatically and allows you to use the Gizmos class to draw stuff in the Editor. Very useful! Now if you go back to the Editor, select the "spawner" object and adjust the spawnRadius variable if needed. Make sure that the centre of the object sits as close to the floor as possible to avoid spawning agents on top of one of the boxes.


tutorial_14.jpg

In the Start() function we will spawn all enemies at once. Not the best way to approach this, but it will work for the purposes of this example. Here is what the code looks like:


using UnityEngine;
using System.Collections;

public class enemySpawner : MonoBehaviour {
	
  public float spawnRadius = 10;
  public int numberOfAgents = 50;
  public GameObject enemyPrefab;
  public Transform player;

  void Start () {
    for (int i=0; i < numberOfAgents; i++) {
      // Choose a random location within the spawnRadius
      Vector2 randomLoc2d = Random.insideUnitCircle * spawnRadius;
      Vector3 randomLoc3d = new Vector3(transform.position.x + randomLoc2d.x, transform.position.y, transform.position.z + randomLoc2d.y);
      
      // Make sure the location is on the NavMesh
      NavMeshHit hit;
      if (NavMesh.SamplePosition(randomLoc3d, out hit, 100, 1)) {
        randomLoc3d = hit.position;
      }
      
      // Instantiate and make the enemy a child of this object
      GameObject o = (GameObject)Instantiate(enemyPrefab, randomLoc3d, transform.rotation);
      o.GetComponent< enemyMovement >().player = player;
    }
  }

  void OnDrawGizmosSelected () {
    Gizmos.color = Color.green;
    Gizmos.DrawWireSphere (transform.position, spawnRadius);
  }
}

The most important line in this script is the function NavMesh.SamplePosition(). It's a really cool and useful function. Basically, you give it a coordinate and it returns the closest point on the navmesh to that coordinate. Consider this example - if you have a treasure chest in your scene that explodes with loot and gold in all directions, you don't want some of the player's loot to go into a wall. Ever. You could use NavMesh.SamplePosition() to make sure that each randomly generated location sits on the navmesh. Here is a visual representation of what I just tried to explain:


tutorial_15_vid.gif

In the video above I have an empty object which does this:


void OnDrawGizmos () {
  NavMeshHit hit;
  // Find the closest point on the navmesh to this object and draw a cube there
  if (NavMesh.SamplePosition(transform.position, out hit, 100.0f, 1)) {
    Gizmos.DrawCube(hit.position, new Vector3 (2, 2, 2));
  }
}
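To make the treasure chest example a bit more concrete, here is a hedged sketch of how loot spawning could use the same trick (SpawnLoot, lootPrefab and lootCount are hypothetical names, not part of this tutorial project):


void SpawnLoot (Vector3 chestPosition) {
  // lootPrefab and lootCount are assumed fields on this script
  for (int i = 0; i < lootCount; i++) {
    // Pick a random point around the chest
    Vector2 offset = Random.insideUnitCircle * 3f;
    Vector3 candidate = chestPosition + new Vector3(offset.x, 0, offset.y);

    // Clamp the point to the navmesh so no loot ends up inside a wall
    NavMeshHit hit;
    if (NavMesh.SamplePosition(candidate, out hit, 100.0f, 1)) {
      Instantiate(lootPrefab, hit.position, Quaternion.identity);
    }
  }
}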

Back to our example, we just made our spawner and we can spawn any number of enemies, in a specific area. Let's see the result with 100 enemies:


tutorial_16_vid.gif

Improving the agents' behavior


What we have so far is nice, but there are still things that need fixing.


To recap, in an RPG or RTS game we want the enemies to get in attack range of the player and stop there. The enemies which are not in range are supposed to find a way around those who are already attacking to reach the player. However here is what happens now:


tutorial_17_vid.gif

In the video above the mobs stop when they get into attack range (the NavMeshAgent's "Arrival Distance" parameter, which we set to 2). However, the enemies that are not yet in range push the others from behind, which leads to all mobs pushing the player as well. We tried to fix this by setting the mobs' avoidance priority to 30, but that doesn't work so well with a big crowd of mobs. It's an easy fix; here is what you need to do:


  1. Set the avoidance priority back to 30 on the "enemy" prefab.
  2. Add a NavMeshObstacle component to the "enemy" prefab.
  3. Modify the enemyMovement.cs file as follows:

using UnityEngine;
using System.Collections;

public class enemyMovement : MonoBehaviour {
	
  public Transform player;
  NavMeshAgent agent;
  NavMeshObstacle obstacle;

  void Start () {
    agent = GetComponent< NavMeshAgent >();
    obstacle = GetComponent< NavMeshObstacle >();
  }

  void Update () {
    agent.destination = player.position;
    
    // Test if the distance between the agent and the player
    // is less than the attack range (or the stoppingDistance parameter)
    if ((player.position - transform.position).sqrMagnitude < Mathf.Pow(agent.stoppingDistance, 2)) {
      // If the agent is in attack range, become an obstacle and
      // disable the NavMeshAgent component
      obstacle.enabled = true;
      agent.enabled = false;
    } else {
      // If we are not in range, become an agent again
      obstacle.enabled = false;
      agent.enabled = true;
    }
  }
}

Basically what we are doing is this - if we have an agent which is in attack range, we want him to stay in one place, so we make him an obstacle by enabling the NavMeshObstacle component and disabling the NavMeshAgent component. This prevents the other agents from pushing around those who are in attack range and makes sure that the player can't push them around either, so he is body blocked and can't run away. Here is what it looks like after the fix:


tutorial_18_vid.gif

It's looking really good right now, but there is one last thing that we need to take care of. Let's have a closer look:


tutorial_19_vid.gif

This is the "jiggling" that I was referring to earlier. I'm sure that there are multiple ways to fix this, but this is how I approached this problem and it worked quite well for my game.


  1. Drag the "enemy" prefab back to the scene and position it at (0, 0, 0).
  2. Create an empty game object, name it "pathfindingProxy", make it a child of "enemy" and position it at (0, 0, 0).
  3. Delete the NavMeshAgent and NavMeshObstacle components from the "enemy" object and add them to "pathfindingProxy".
  4. Create another empty game object, name it "model", make it a child of "enemy" and position it at (0, 0, 0).
  5. Make the cylinder and the cube children of the "model" object.
  6. Apply the changes to the prefab.

This is how the "enemy" object should look like:


tutorial_20.jpg

What we need to do now is use the "pathfindingProxy" object to do the pathfinding for us, and then move the "model" object after it while smoothing the motion. Modify enemyMovement.cs like this:


using UnityEngine;
using System.Collections;

public class enemyMovement : MonoBehaviour {

  public Transform player;
  public Transform model;
  public Transform proxy;
  NavMeshAgent agent;
  NavMeshObstacle obstacle;

  void Start () {
    agent = proxy.GetComponent< NavMeshAgent >();
    obstacle = proxy.GetComponent< NavMeshObstacle >();
  }

  void Update () {
    // Test if the distance between the agent (which is now the proxy) and the player
    // is less than the attack range (or the stoppingDistance parameter)
    if ((player.position - proxy.position).sqrMagnitude < Mathf.Pow(agent.stoppingDistance, 2)) {
      // If the agent is in attack range, become an obstacle and
      // disable the NavMeshAgent component
      obstacle.enabled = true;
      agent.enabled = false;
    } else {
      // If we are not in range, become an agent again
      obstacle.enabled = false;
      agent.enabled = true;
      
      // And move to the player's position
      agent.destination = player.position;
    }
        
    model.position = Vector3.Lerp(model.position, proxy.position, Time.deltaTime * 2);
    model.rotation = proxy.rotation;
  }
}

First, remember to connect the public variables "model" and "proxy" to the corresponding game objects, apply the changes to the prefab and delete it from the scene.


So here is what is happening in this script. We are no longer using transform.position to check the distance between the mob and the player; we use proxy.position, because only the proxy and the model are moving, while the root object stays at (0, 0, 0). I also moved the agent.destination = player.position; line into the else statement for two reasons. First, setting the destination of the agent will make it active again, and we don't want that to happen if it's in attack range. Second, we don't want the game to be calculating a path to the player if we are already in range. It's just not optimal. Finally, with these two lines of code:


	model.position = Vector3.Lerp(model.position, proxy.position, Time.deltaTime * 2);
	model.rotation = proxy.rotation;

We are setting the model.position to be equal to proxy.position, and we are using Vector3.Lerp() to smoothly transition to the new position. The "2" constant in the last parameter is completely arbitrary – set it to whatever looks good. It controls how quickly the interpolation occurs or, put differently, the acceleration. Finally, we just copy the rotation of the proxy and apply it to the model.


Since we introduced acceleration on the "model" object, we don't need the acceleration on the "proxy" object. Go to the NavMeshAgent component and set the acceleration to something stupid like 9999. We want the proxy to reach maximum velocity instantly, while the model slowly accelerates.
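If you would rather do that from code than from the Inspector, one extra line in enemyMovement's Start() is enough (the value is just as arbitrary as in the manual step):


  void Start () {
    agent = proxy.GetComponent< NavMeshAgent >();
    obstacle = proxy.GetComponent< NavMeshObstacle >();
    agent.acceleration = 9999f;  // the proxy reaches full speed instantly; the model does the smoothing
  }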


This is the result after the fix:


tutorial_21_vid1.gif

And here I have visualized the path of one of the agents. The path of the proxy is in red, and the smoothed path by the model is in green. You can see how the bumps and movement spikes are eliminated by the Vector3.Lerp() function:


tutorial_221.jpg

Of course that path smoothing comes at a small cost - the agents will intersect a bit more, but I think it's totally fine and worth the tradeoff, since it will be barely noticeable with character models and so on. Also the intersecting tends to occur only if you have something like 50-100 agents or more, which is an extreme case scenario in most games.


We keep improving the behavior of the agents, but there is one last thing that I'd like to show you how to fix: the rotation of the agents. Right now we are smoothing the proxy's position, but we are copying its exact rotation, which means that the agent might be looking in one direction but moving in a slightly different direction. What we need to do is rotate the "model" object according to its own velocity, rather than using the proxy's velocity. Here is the final version of enemyMovement.cs:



using UnityEngine;
using System.Collections;

public class enemyMovement : MonoBehaviour {

  public Transform player;
  public Transform model;
  public Transform proxy;
  NavMeshAgent agent;
  NavMeshObstacle obstacle;
  Vector3 lastPosition;

  void Start () {
    agent = proxy.GetComponent< NavMeshAgent >();
    obstacle = proxy.GetComponent< NavMeshObstacle >();
  }

  void Update () {
    // Test if the distance between the agent (which is now the proxy) and the player
    // is less than the attack range (or the stoppingDistance parameter)
    if ((player.position - proxy.position).sqrMagnitude < Mathf.Pow(agent.stoppingDistance, 2)) {
      // If the agent is in attack range, become an obstacle and
      // disable the NavMeshAgent component
      obstacle.enabled = true;
      agent.enabled = false;
    } else {
      // If we are not in range, become an agent again
      obstacle.enabled = false;
      agent.enabled = true;
      
      // And move to the player's position
      agent.destination = player.position;
    }
        
    model.position = Vector3.Lerp(model.position, proxy.position, Time.deltaTime * 2);

    // Calculate the orientation based on the velocity of the agent
    Vector3 orientation = model.position - lastPosition;
    
    // Check if the agent has some minimal velocity
    if (orientation.sqrMagnitude > 0.1f) {
      // We don't want him to look up or down
      orientation.y = 0;
      // Use Quaternion.LookRotation() to set the model's new rotation and smooth the transition with Quaternion.Lerp();
      model.rotation = Quaternion.Lerp(model.rotation, Quaternion.LookRotation(orientation), Time.deltaTime * 8);
    } else {
      // If the agent is stationary we tell him to assume the proxy's rotation
      model.rotation = Quaternion.Lerp(model.rotation, Quaternion.LookRotation(proxy.forward), Time.deltaTime * 8);
    }
    
    // This is needed to calculate the orientation in the next frame
    lastPosition = model.position;
  }
}

At this point we are good to go. Check out the final result with 200 agents:


tutorial_23_vid1.gif

Final words


This is pretty much everything that I wanted to cover in this article. I hope you liked it and learned something new. There are also lots of improvements that could be made to this project (especially with Unity Pro), but this article should give you a solid starting point for your game.


Originally posted to http://blackwindgames.com/blog/pathfinding-and-local-avoidance-for-rts-rpg-game-with-unity/
18 Jun 10:48

Lessons in insecure SSL courtesy of Hoyts cinemas

by Troy Hunt

Why do we bother with SSL? I mean what’s the risk that we’re trying to protect against by using certificate authorities and serving up traffic over HTTPS? Usually it’s men (or possibly even women) in the middle or in other words, someone sitting somewhere between the client and the server and getting their hands on the data. Do we all agree with this? Yes? Good, then why on earth would you possibly say this?

@slaneyrw @troyhunt Hi Robert, thanks for reaching us! We can confirm that all payment details are all sent via a API which is secured.

This was in response to Robert kindly pointing out that their payment screen is not secured. Robert, of course, is entirely correct:

Hoyts payment screen loaded over HTTP

Whoa – no padlock in the address bar! Oh no, wait, there it is down in the bottom left corner, let me enhance that for you:

Thawte padlock icon

This is what we refer to as “Security Theatre” and it’s the fake boobs of web security – sure it looks enticing from the outside but it offers no real substance. In fact here’s what Thawte tells us about web security when you click through the logo:

Authentic Sites use thawte SSL Web Server Wildcard Certificates to offer secure communications by encrypting all data to and from the site.

Encrypting all data? Hang on a moment – is the logo lying or has Hoyts got it wrong?! Well firstly, these logos are completely useless and as I’ve said before, they must die. In fact just last year I demonstrated how using the rationale presented by the likes of Thawte and Norton, I am, in fact, the world’s greatest lover. But I digress.

Now I’ve played the “You’re-not-secure, Yes-we-are-because-we-post-over-HTTPS, No-you’re-not-really” game many, many times before so I knew exactly how this was going to pan out. After further to and fro it inevitably boils down to the accused defending the indefensible because frankly, they just don’t know any better:

@devlead @kyhwana @troyhunt @slaneyrw Its a single page javascript app the content is HTTP, but all client data goes over an SSL web service

Here’s what Hoyts is referring to and I’ll load this up in Chrome on the desktop to make things easy:

The payment form loaded over HTTP but posting to HTTPS

Yes, this is sent over HTTPS and yes, the credit card payload is encrypted. No, it is not secure. Let me demonstrate with a story.

Bruce is out with the wife and kids doing some shopping one afternoon. As many of us know, this is laborious work. He sits down at a cafe, grabs a coffee and jumps on the free wifi. He decides to see what’s on at the flicks so jumps over to Hoyts and sees Transformers: Age of Extinction. As we also know, watching Transformers is more fun than shopping with wives so he decides to order some tickets online. Now Bruce is a smart bloke and he knows that public wifi in particular is a high-risk zone when it comes to protecting the transport layer but being Hoyts – arguably one of our premier cinemas down here – he figures they’ve done their job right and protected him from rogue (wo)men in the middle. Besides – they’ve got a Thawte logo! He jumps over to the payment screen and sees this:

iPhone-5

Ok, it’s a little odd to be asked for his Mrs’ credit card info, but given she’s making all the shopping decisions today anyway, he figures that’s fair enough (it might be one of those “smart” websites that knows who holds the purse strings). He enters the credit card info, submits the form and…

POST http://hackyourselffirst.troyhunt.com/OurApiIsSecure HTTP/1.1
Host: hackyourselffirst.troyhunt.com
Accept-Language: en-us
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53
Accept: application/json, text/javascript, */*; q=0.01
Referer: http://m.hoyts.com.au/
Content-Type: application/json
Connection: keep-alive
Connection: keep-alive
Authorization: Bearer AAEAAAvjgLClEirdUF180lVXbHy_OhSQDxCbTeYu9pavG-NbnpNw5bnMcaWNfSwsllxis00ZdNDgxst06OMRyPumU_4WS-ep-lhMOyPTYsRuB6_EF77MqOC6O5jhAYxxMNM7YgbsfsuvpRRNF91xtW5WsgTX0f2Vh7-Ku8Fct6SMLRf3LFKkq1qzqTMw415HyTMJyPKMdT___EBYwi4gsAwFWjj6vai4kBmLJaKN2bLaiiiAmioMgKt_j0l8AXXiIQGssp-8oz3zeua5jcrct511l1CNn4SblJvvLAi9SYokzMJ5xRVLYR4mR9ysspsHi-vXsW3E9fO1iO36xPoi3YZEUV10AQAAAAEAACjJE4WRlW9Tutc-WOk6YTWZPyFIbHEBzMKAmYuLfm3B3RlApptawuNVTwma5AYfEOLrNrXU-FyB7Mfum1lwYWyPbNU-PRHXnw2VKiFu_-unvnhmJv-XthUjrsewQNFGXT2Eskc-JlFQlJ0LoYLYc2Kw4dSs4z8yjnyO9x5q8Vv79J55bzG7vIf-oF18JjNTojbn39O78LCP4ml_omjIawEv4OIHXvyPLnqyw38-qSGDmJ_CcDnKrMYG25ebgblm0ymMtkr7FOldFLMRw6YfkbGrs0baiH6esnvDM8YVteqCB4J23LDIt5p-ZA7KxfHY6Q8Hsdio5uqNZOJkJU7Gf-3ngQaC8m2AQRoBFSbMiuoXtONh9toVjpYjxRUEC_MkUmv7QKZqZUQZFyxh-CDUwA7Wzupm0TtyI1eZSCRtI2_PJcc4bCd5e30PWOTmhRTgInlsB4lIo8YZYoMwHRsfkCEuE73dTn5PMvgZADSwv5wm
Content-Length: 200
Origin: http://m.hoyts.com.au
Accept-Encoding: gzip, deflate

{"CreditCardDetails":{"CardType":"Amex","Number":"378282246310005","Name":"Shirl Smit","CVC":"123","ExpiryMonth":"07","ExpiryYear":"2016","Email":"bruce@gmail.com"},"orderId":"JOH8uMo5PEKTEZt7uOkykA"}

This is just a raw HTTP request as there’s nothing worth showing on the screen and it doesn’t matter anyway – Bruce is already stuffed. What’s happened here is fundamentally simple which makes it all the more worrying that it’s a problem in the first place. Bruce was indeed connected to a rogue wifi network and the attacker had access to his traffic – precisely the risk Hoyts was trying to avoid by adding SSL to the site in the first place! However, because they loaded the payment card page over an insecure connection, the attacker could easily modify it.

That should be about it and normally I’d just leave it there – you loaded your form insecurely and put your customers at risk before they even got to entering their sensitive data. But Hoyts is a little complicated insofar as it’s all a bit “single-page-appy” and it wasn’t just a case of changing the action attribute of the form in HTML. Here’s the mechanics of it:

When m.hoyts.com.au loads over HTTP, it embeds the file built.js which is also loaded over HTTP. This is what’s colloquially referred to as “The Mother of all JavaScript Files”. It’s minified (which is good), and it bundles multiple libraries (which is also good) but it’s still a massive 967kb and isn’t HTTP compressed (stick that in your mobile 3G connection!) For context, that comes out at about 33k lines unminified. Anyway, all the orchestration of all the features happens in here and because it’s loaded over HTTP, a man in the middle (excuse me while I drop the gender equality for the sake of brevity) can easily manipulate the contents.

There are many ways of modifying or observing unencrypted traffic. The Tunisian government was fond of doing it at the ISP level. Firesheep made good use of open public wifi and network cards that can run in monitor mode. I myself have been known to carry a Pineapple (for research, of course) which is a particularly capable way of getting into the middle of traffic sent by mobile devices. For demos like this though, I like to use Fiddler.

Whilst you’re not (usually) going to be MitM’ing anyone for real with Fiddler, it makes it very easy to demonstrate the potential and particularly easy for others to reproduce the process (Hoyts did seem appreciative of the feedback so presumably it will be in their interest to be able to reproduce this). To demo this via Fiddler, I simply added the following piece of Fiddler script to the OnBeforeResponse event using FiddlerScript:

if (oSession.oResponse.headers.ExistsAndContains("Content-Type", "application/x-javascript")) {
  oSession.utilDecodeResponse();
  oSession.utilReplaceInResponse('"Credit Card Number"', '"Your Wife\\\'s Credit Card Number"');
  oSession.utilReplaceInResponse('"Credit Holder\\\'s Name"', '"Your Wife\\\'s Name"');
  oSession.utilReplaceInResponse('this.getBaseUrl()+"/orders/"+this.attributes.orderId+"/pay/"',
      '"http://hackyourselffirst.troyhunt.com/OurApiIsSecure"');
}

if (oSession.HTTPMethodIs("OPTIONS") && oSession.HostnameIs("hackyourselffirst.troyhunt.com")) {
  oSession.oResponse.headers.Add("Access-Control-Allow-Origin", "*");
  oSession.oResponse.headers.Add("Access-Control-Allow-Headers", "authorization,content-type");
}

This is a pretty simple little script but let’s go through it anyway. There are two main parts to this with the first block of code being all about replacing response contents when the content type is JavaScript. This is essentially modifying the built.js file which is easily achievable because it’s loaded over HTTP. A real MitM could also do this on the fly, or store their own modified copy and serve that up or even just route the request to their own service. You can slice it and dice it many ways, but my script is simply replacing three parts of the file:

  1. Changing “Credit Card Number” to “Your Wife’s Credit Card Number” (this is just to give a visual representation within the phone of the traffic having been modified)
  2. Changing “Credit Holder’s Name” to “Your Wife’s Name” (same reason again)
  3. Changing the path the form ultimately posts to (this is the heart of the attack)

That’s the first part. The second part adds two headers which facilitate the cross-domain async posting of the JSON data, which is how the card info is sent. This is required because, when posting cross-domain in this fashion, the browser will make an OPTIONS request before sending over the payload, and if the target server doesn’t respond by saying it will allow the request (by virtue of these headers being present), the data is never sent.
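To give a rough idea of what the receiving end needs to do, here is a hedged sketch of a generic ASP.NET handler (the class name is made up; this is not the actual code behind hackyourselffirst.troyhunt.com) that answers the preflight with an empty 200 and then accepts the posted JSON:


public class CardCaptureHandler : System.Web.IHttpHandler
{
  // Hypothetical handler; it only illustrates the CORS mechanics described above
  public void ProcessRequest(System.Web.HttpContext context)
  {
    // Tell the browser the cross-origin request is allowed
    context.Response.AddHeader("Access-Control-Allow-Origin", "*");
    context.Response.AddHeader("Access-Control-Allow-Headers", "authorization,content-type");

    if (context.Request.HttpMethod == "OPTIONS")
    {
      // An empty 200 is all the preflight needs
      context.Response.StatusCode = 200;
      return;
    }

    // The POST body is the JSON credit card payload
    string json = new System.IO.StreamReader(context.Request.InputStream).ReadToEnd();

    // A real attacker would quietly store it; here it's just echoed back
    context.Response.ContentType = "application/json";
    context.Response.Write(json);
  }

  public bool IsReusable { get { return false; } }
}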

So that’s it – the only other thing I needed was a resource on the hackyourselffirst.troyhunt.com domain that would respond with an empty HTTP 200 when an OPTIONS request was made and another which would accept the POST request. Here’s how it looks when the request is actually sent:

The request with credit card info in Fiddler

Of course the key here is the message in the bottom right – this is my site (or the would-be attacker’s site) successfully reporting that it’s received the victim’s credit card info and reflecting it back out to the screen. You don’t see it in the mobile app as it’s expecting a JSON response with entirely different content and in reality, an attacker would be far smarter about hiding this information away and ensuring the end user has no apparent disruption in their service.

All of this boils down to the simplest of solutions – the payment page must load everything over HTTPS. Ah, but what about the pages linking to the payment page – couldn’t they be manipulated to link elsewhere? Yes (although the certificate on the payment page would then be wrong if anyone looked at it), and if you’re worried about that then you load them over HTTPS too. Well what about if someone just enters m.hoyts.com.au into their browser and the site then redirects to HTTPS – isn’t that first HTTP request a risk? Yes, and we don’t have good answers for it yet short of using HSTS headers so that once the site has been loaded securely, the browser can’t make an insecure request later on. It’s not a perfect science, but it sure beats loading forms that collect sensitive data over insecure connections.
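As a rough sketch of what that HSTS mitigation can look like on an ASP.NET site (a fragment you might drop into Global.asax.cs; where exactly you emit the header and the one-year max-age are choices, not anything Hoyts does), something along these lines would do it:


protected void Application_BeginRequest(object sender, System.EventArgs e)
{
  // Sketch only: one possible place to emit HSTS. Only send it over a secure
  // connection; one year, including subdomains, is a typical policy.
  if (Request.IsSecureConnection)
  {
    Response.AddHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  }
}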

Particularly in the wake of events such as Target last year where 110M credit cards went walkabout, it’s somewhat mindboggling to see such a fundamental error when handling payment card info. In fact there’s a good chance that PCI DSS wouldn’t take a very fond view of this, and they have a bit of a track record of taking merchants to task over sloppy practices when handling their card data. Certainly as a customer, this is not a page you want to be trusting your card data with any more than you would any site that didn’t have the prerequisite padlock in the address bar and HTTPS in the URL.

Lastly, some advice for the social media drivers behind public faces such as the Hoyts Twitter account: if a concerned customer is raising security concerns and particularly if they’re echoed by a bunch of folks with titles like “technical architect”, “developer” and “hacker”, it might actually be worth thinking about some of those fundamental tenets of dealing with technical queries and taking the discussion offline for further investigation. Public statements of security prowess in the face of clear and present vulnerabilities often result in someone taking you to task over it :)

Update: Several days after posting this, the Hoyts website at http://m.hoyts.com.au was updated so that every HTTP request is redirected to HTTPS therefore causing the page that accepts financial data to be loaded over a secure connection. Good one Hoyts!

28 May 01:15

Makerbot Patents twist the knife on open source 3D Printing roots, community responds.

by RichRap
Something is leaving a bad smell in the open source community – could it be the recent patents of big cheese Bre at Makerbot?

It's always very sad when you see a company or group turn their back on the community or project that helped make them successful.

Makerbot seem to be traveling down a darker and darker road as time goes on. I'm not sure if they felt invincible or thought that no one would notice, but a recent patent frenzy by Makerbot has angered the open source 3D printing community. And we don't yet know what more is to come.

I prefer to use this blog to document work and talk about good things, so this is really a summary of past and recent events and links for you to find out more and make your own mind up about Makerbot's actions and community reaction to these disappointing and Machiavellian maneuvers.

And before we go on: I am not against patents. They can be used for good, to protect ideas and costly investments in research, which for some is a necessity to fund the sales and marketing of a product or invention. I don't think they are very fair, and they are certainly not appropriate in many (most) cases, but they are not going away any time soon. The trade-off is that you can exploit an idea under a patent for a set period whilst also disclosing information about the invention and its use.

Yes, RepRap and many open-source developments were built on expired patents. That's how it's supposed to work. A company gets 20 years to exploit its patented invention; if after that time the market demand is still there and competitors can build a better mousetrap, then competition usually kick-starts another industry cycle, and one could argue the original patented idea was not exploited well enough or the market was not satisfied with just one source of mousetrap.

What angers individuals is when ideas and developments that were in the public domain, very often under a specific open-source license, are taken and patented by a company so that others now automatically infringe (or could be accused of infringing).

Patent trolling, or this sort of abuse of the patent system, is about as insulting as you can get to a community that encourages open innovation and the sharing of open source developments.

Companies that can afford to mop up open innovation into hundreds of patents, loosely based on ideas common in an industry or sector, often have the power to legally threaten anyone who becomes a competitor. This abuse of the patent system can tie up smaller companies and stop new developments from even starting, due to the sword of Damocles hanging overhead.

Many things are just not worth patenting for companies or individuals, due to the cost, the possible return and the need to be able to defend your patent if required. But if these seemingly insignificant ideas are patented by bigger companies who can afford the process, as a way to expand their patent portfolio, then even though they were not 'invented' by those companies they take on another form of power.

Makerbot did 'invent' and patent an automated build platform; it was a commercial failure and proves the point that not all ideas are worth patenting. The idea of an automatic build platform was also openly discussed by various people when I first joined RepRap, long before the Makerbot patent, the difference being that no one found it useful, practical or in any way essential, so the news that Makerbot patented it was just not such an issue.

Recent turbulent Thingiverse/Makerbot history - 
Back in September 2012, changes to the Thingiverse Terms of Service, along with various concerns that Makerbot was moving away from its RepRap and open-source roots, led to an Occupy Thingiverse campaign.

This was exactly what Thingiverse looked like on 20th September 2012 -


That was the same day Thingiverse decided to promote and feature my 3-way colour mixing extruder (and it had already been on Thingiverse and my blog for 28+ days before this) – see the image below: 'Created by RichRap 28 days ago'.


And the very next day, on the 21st of September 2012, they filed a patent for build material switching.

Along with my 3-way multi-input material extruder above, I also posted, documented and shared my simple solution using lengths of different materials to produce multi-coloured objects - that was back in June 2011.

Makerbot want to patent various things to do with multicolour and multi-material mixing, much of which has been done and documented in the open source community (even directly at Bath University, UK, as early as Feb 2012, and mentioned on the RepRap Blog in June 2012 – the Myles Corbett report). This has, over the last few years, been a very active area of open-source 3D printer development for myself and many others.

Plenty of other patent applications are also being submitted; we can only see the ones that have been published, so who knows what other things are in the pipeline...

Many of the recent Makerbot patent applications can be seen and read on this site here - You can search for Makerbot, or click on this search link here 

Title - BUILD MATERIAL SWITCHING
Abstract - A three-dimensional printer uses transitional lengths of build material to facilitate changes from one color to another during a fabrication process, and more generally to achieve multi-color objects corresponding to color patterns on the surface of a three-dimensional model. The transitional lengths of build material may be positioned inside a fabricated object, such as for infill, or outside the fabricated object where undesirable aesthetic properties of the transitional lengths will not impact the desired distribution of colors on the surface of the fabricated object.

Title - COLOR SWITCHING FOR THREE-DIMENSIONAL PRINTING
Abstract - By reversing the direction of a first build material fed into an extruder, the first build material can be wholly or partially evacuated from the extruder before a second material is introduced. This approach mitigates transition artifacts and permits faster, more complete changes from one build material to another.

A founder's perspective - 
A day after the 2012 Occupy Thingiverse, Zach Hoeken posted his view of the changes he had observed at Makerbot here -


Makerbot Patent Rage Coverage - 

OpenbeamUSA has one of the best overviews of recent Makerbot patent troubles, it's well worth a read, and I don't need to repeat things here - Stay classy, Makerbot

This also contains the details of how you can file a prior art application to help block a patent that is based on someone else's work or in the public domain.

Another good summary of the Quick Release Extruder Patent Here - 

Hackaday.com covered - makerbot files patents internet goes crazy 

Fabbaloo Posted - Has MakerBot Crossed The Line? For Some, Yes


Edit:- Added more coverage -
3deee.ch Makerbot Vs Open Source

TechDirt - MakerBot Files For Patent On A Design Derived From Work By Its Community


Please discuss, share and the tag #TakerBot seems to be being used by people - 

Visual guide graphics below courtesy of Anthony Clark.





If you are now looking for a place to share, YouMagine is a very good starting point.

GitHub is also an option, and a simple method to export all your things from Thingiverse uses a simple Python script by Carlos García Saura - Look Here for info.

Repables is also looking good for content sharing, and is being further developed right now.

Ultimately work like the Thing Tracker Network, by Gary Hodgson may make it easier to link and share designs and models from almost any source, so don't stop sharing and keep it open source.
 If you see any other coverage on the Makerbot Patents, post links below. Thanks.

Other news - 

The 3D Printing Filament spool standard campaign is still going well, updates soon.

E3D just released the V6 hot-end – I have had one for a while – it's amazing!

++Lots++ of new things to share with you soon, new printers, extruders, developments and materials :) So long as people don't take them and patent the work as their own...


Rich.

16 Apr 11:18

Exploring ASP.NET Web Pages - A fully-featured MiniBlog using just Razor

by Scott Hanselman

ASP.NET "Razor" Web Pages are ASP.NET sites without models, views, controllers, or project files. Some folks say "oh, that's just Classic ASP, or PHP right? Not at all. It's the full power and speed of the .NET CLR, the full syntax of C#, LINQ, along with things like C# dynamics. It's super powerful, and my friend Mads and I are surprised more people don't use them for small things.

In fact, Rob Conery and I did the http://thisdeveloperslife.com web site using just Razor and Rob's "massive" micro-ORM. Later I made http://hanselminutes.com with Web Pages as well.

This blog runs DasBlog, an older ASP.NET 2.0 blogging engine I worked on with Clemens Vasters and a lot of co-contributors, but I'm actively checking on Mads' MiniBlog, a minimal but VERY competent blog engine using Razor Web Pages. Why wouldn't I use something like Ghost? I've thought about it, but MiniBlog is SO minimal and that makes it very attractive.

Here's some things I like about MiniBlog, as both a blog and a learning tool.

Minimal

It's not called Mini for fun. There's a truly minimal packages.config of dependencies:

<packages>
  <package id="AjaxMin" version="5.2.5021.15814" targetFramework="net45" />
  <package id="Microsoft.AspNet.Razor" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.AspNet.WebPages" version="3.0.0" targetFramework="net45" />
  <package id="Microsoft.Web.Infrastructure" version="1.0.0.0" targetFramework="net45" />
  <package id="xmlrpcnet" version="3.0.0.266" targetFramework="net45" />
  <package id="xmlrpcnet-server" version="3.0.0.266" targetFramework="net45" />
</packages>

Clean use of Handlers for Web Services

Blogs do more than just serve pages, there is also a need for RSS feeds, MetaWeblog Web Services for things like Windows Live Writer, and dynamic minification for JS and CSS.

<handlers>
  <add name="CommentHandler" verb="*" type="CommentHandler" path="/comment.ashx"/>
  <add name="PostHandler" verb="POST" type="PostHandler" path="/post.ashx"/>
  <add name="MetaWebLogHandler" verb="POST,GET" type="MetaWeblogHandler" path="/metaweblog"/>
  <add name="FeedHandler" verb="GET" type="FeedHandler" path="/feed/*"/>
  <add name="CssHandler" verb="GET" type="MinifyHandler" path="*.css"/>
  <add name="JsHandler" verb="GET" type="MinifyHandler" path="*.js"/>
</handlers>

MiniBlog uses .ashx files (HttpHandlers) and wires them up in web.config. RSS feeds are easily handled with System.ServiceModel.Syndication, and even JavaScript and CSS minification goes through a handler. Though MiniBlog is very new, it uses the old but extremely reliable CookComputing.XmlRpc for the MetaWeblog service communication with Windows Live Writer.
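As a rough illustration of how little code System.ServiceModel.Syndication needs (this is a hedged sketch with made-up titles and example.com URLs, not MiniBlog's actual FeedHandler), a handler can build an RSS feed like this:


using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;
using System.Web;
using System.Xml;

public class SimpleFeedHandler : IHttpHandler
{
  // Hypothetical handler, not MiniBlog's FeedHandler
  public void ProcessRequest(HttpContext context)
  {
    var items = new List<SyndicationItem>
    {
      new SyndicationItem("Hello world", "First post", new Uri("http://example.com/post/hello-world"))
    };

    var feed = new SyndicationFeed("My blog", "Latest posts", new Uri("http://example.com/"), items);

    context.Response.ContentType = "application/rss+xml";
    using (var writer = XmlWriter.Create(context.Response.Output))
    {
      // Serialize the feed as RSS 2.0 straight into the response
      feed.GetRss20Formatter().WriteTo(writer);
    }
  }

  public bool IsReusable { get { return false; } }
}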

No Database Needed

I like apps that can avoid using databases. Sometimes the file system is a fine database. I thought this when we worked on DasBlog, Mads thought it when he made BlogEngine.NET (his original blog engine) and that "no database needed" design tenet continues with MiniBlog. It stores its files in XML, but MiniBlog could just as easily use JSON.
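To illustrate the file-system-as-database idea (again just a sketch under assumed names, not MiniBlog's actual storage code), saving a post can be as simple as writing one small XML file per post:


using System;
using System.Web.Hosting;
using System.Xml.Linq;

public static class PostStore
{
  // Sketch only - the element names here are illustrative, not MiniBlog's format
  public static void Save(string id, string title, string content)
  {
    var doc = new XDocument(
      new XElement("post",
        new XElement("title", title),
        new XElement("pubDate", DateTime.UtcNow.ToString("o")),
        new XElement("content", content)));

    // One XML file per post - no database required
    string file = HostingEnvironment.MapPath("~/posts/" + id + ".xml");
    doc.Save(file);
  }
}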

Clean Content-Editable Design Service

I always (exclusively) use Windows Live Writer for my blog posts. WLW is also the preferred way to write posts with MiniBlog. However, if you insist, MiniBlog also has a really nice content-editable scheme with a great toolbar, all in the browser:

Nice Editing Experience

When you are viewing a post while logged in as Admin, you click Edit and turn the page into editable content.

editPost = function () {
  txtTitle.attr('contentEditable', true);
  txtContent.wysiwyg({ hotKeys: {}, activeToolbarClass: "active" });
  txtContent.css({ minHeight: "400px" });
  txtContent.focus();

  btnNew.attr("disabled", true);
  btnEdit.attr("disabled", true);
  btnSave.removeAttr("disabled");
  btnCancel.removeAttr("disabled");
  chkPublish.removeAttr("disabled");

  showCategoriesForEditing();

  toggleSourceView();

  $("#tools").fadeIn().css("display", "inline-block");
}

The resulting HTML you write (in a WYSIWYG mode) is converted into XHTML and posted back to MiniBlog:

parsedDOM = ConvertMarkupToValidXhtml(txtContent.html());

$.post("/post.ashx?mode=save", {
  id: postId,
  isPublished: chkPublish[0].checked,
  title: txtTitle.text().trim(),
  content: parsedDOM,
  categories: getPostCategories(),
})

The JavaScript is surprisingly simple, and it gets one thinking about adding basic editing and CMS functions to websites. A design mode would have been a daunting task 5 years ago; with today's JavaScript it's almost trivial.
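For completeness, here's a hedged sketch of what the receiving end of that $.post can look like. It is not MiniBlog's actual PostHandler; the mode and field names simply mirror the JavaScript above, and it reuses the illustrative XmlPostStore class from the storage sketch earlier.

using System.Web;

// Sketch only: the server side of /post.ashx?mode=save.
public class SimplePostHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Only a logged-in admin should be able to save posts.
        if (!context.User.Identity.IsAuthenticated)
        {
            context.Response.StatusCode = 401;
            return;
        }

        if (context.Request.QueryString["mode"] == "save")
        {
            string id = context.Request.Form["id"];
            string title = context.Request.Form["title"];
            string content = context.Request.Form["content"];

            XmlPostStore.Save(context.Server.MapPath("~/posts/"), id, title, content);
        }
    }

    public bool IsReusable { get { return true; } }
}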

It even automatically optimizes images you drag and drop into the design surface and upload.

public static string SaveFileToDisk(byte[] bytes, string extension)
{
    string relative = "~/posts/files/" + Guid.NewGuid() + "." + extension.Trim('.');
    string file = HostingEnvironment.MapPath(relative);

    File.WriteAllBytes(file, bytes);

    var cruncher = new ImageCruncher.Cruncher();
    cruncher.CrunchImages(file);

    return VirtualPathUtility.ToAbsolute(relative);
}

The code is fun to read, and you can go check it out at https://github.com/madskristensen/MiniBlog. It supports HTML5 microdata, sitemaps, both RSS and Atom, simple theming, and scores 100/100 on Google PageSpeed.





© 2014 Scott Hanselman. All rights reserved.
     
14 Mar 02:28

The World’s Greatest Azure Demo

by Troy Hunt

I had an opportunity recently, an opportunity to give a really impactful demonstration of Windows Azure to people who had not yet drunk from the Microsoft cloud fountain of love. These were people from the “old world” where men were men and infrastructure wasn’t a service, it was cold, hard metal that cost a ton and stuck with you until the damn thing was puffing out smoke.

But these were also people that were attracted to the promise of the “new world”; the on-demand, auto-scaling, commoditised, we-can’t-quite-solve-world-hunger-yet-but-we’ll-give-it-a-damn-good-go promise that’s in all the PowerPoint slides they see. Now I’ve seen some very good PPT decks before, but nothing speaks to people like a working product.

There are a lot of magnificent features in Azure and all sorts of services that can pull off some pretty impressive stunts that get the geek-brigade wetting their pants with excitement, but the people you need to really convince of the awesome are rarely the ones marvelling at the process isolation of the idempotent polymorphic shape-shifter widgets (I hope the sentiment is conveyed by the ludicrousness of this statement). No, the people who need convincing are the ones who want to know things like how much sooner it allows them to deliver working software to their customers, what options it will give them to cut costs and, importantly in the context of this demo, they want a good high-level view of how the damn thing actually works. This is the "pitch" of this demo – those are the guys I'm trying to reach.

I set a lofty goal for this – “The world’s greatest Azure demo” – and in the context of what I’ve just described about the target demographic, I reckon it’s come out pretty good. Set a high goal then jump like mad.

THE WORLD'S GREATEST AZURE DEMO - All the awesome wrapped up into a one hour superdemo

I'm going to cover 14 discrete topics all stitched up into one superdemo. The plan was to take about an hour per the title in the website you see above (this is a real live website I set up in the demo and push out to worldsgreatestazuredemo.com, by the way), but I got, uh, a bit carried away. Only by another 22 minutes, but sometimes there's just a story that wants to get out and it's hard to hold it in.

Here’s where all the effort goes:

  1. Getting started with a VM
  2. Getting started with a web site
  3. Deploying from Visual Studio
  4. Binding a domain
  5. Deploying from source control
  6. Managing the VM
  7. Scaling the VM
  8. Monitoring
  9. Creating an availability set
  10. Adding disks to the machines
  11. Auto scaling the website
  12. Auto provisioning with the Windows Azure Management Libraries (see the code sketch just after this list)
  13. Getting started with SQL Azure
  14. Backing up SQL Azure
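Since item 12 leans on the Windows Azure Management Libraries, here is a heavily hedged C# sketch of the kind of code involved. It assumes the 2014-era WAML NuGet packages, a management certificate already installed on the machine, and a placeholder subscription ID; exact namespaces and member names may differ between WAML versions, so treat it as a sketch rather than a definitive implementation.

using System;
using System.Security.Cryptography.X509Certificates;
using Microsoft.WindowsAzure;                       // CertificateCloudCredentials (assumed package)
using Microsoft.WindowsAzure.Management.WebSites;   // WebSiteManagementClient (assumed package)

// Hedged sketch only: list the web spaces visible to a subscription.
class ListWebSpaces
{
    static void Main()
    {
        // Placeholder values: a real subscription ID and a management
        // certificate in the CurrentUser\My store are assumed.
        const string subscriptionId = "00000000-0000-0000-0000-000000000000";
        const string certThumbprint = "PASTE-MANAGEMENT-CERT-THUMBPRINT";

        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, certThumbprint, false)[0];

        var credentials = new CertificateCloudCredentials(subscriptionId, cert);

        using (var client = new WebSiteManagementClient(credentials))
        {
            // Listing web spaces (regions) is the smallest end-to-end call to
            // verify the credentials; provisioning hangs off the same client.
            foreach (var space in client.WebSpaces.List())
            {
                Console.WriteLine(space.Name);
            }
        }
    }
}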

So onto the demo – enjoy and share generously!

Note: I misquoted a couple of things when talking about availability, see this comment below for clarification.

Want more awesome?

Windows Azure Friday: It's a Friday, it's Hanselman, it's more Azure goodness to wrap up each week with

Pluralsight: Hardcore developer and IT video training by the pros with heaps of Azure material

Azure Support Forum: Learn by reading the challenges others are facing — no really, read these, you'll learn heaps

Stack Overflow Azure tag: Same deal — read these, learn heaps

Scott Gu's blog: It's all Azure lately and it's all good — get the good word on what's new from the man himself

02 Mar 10:28

10 Years of Coding Horror

by Jeff Atwood

In 2007, I was offered $120,000 to buy this blog outright.

I was sorely tempted, because that's a lot of money. I had to think about it for a week. Ultimately I decided that my blog was an integral part of who I was, and who I eventually might become. How can you sell yourself, even for $120k?

I sometimes imagine how different my life would have been if I had taken that offer. Would Stack Overflow exist? Would Discourse? It's easy to look back now and say I made the right decision, but it was far less clear at the time.

One of my philosophies is to always pick the choice that scares you a little. The status quo, the path of least resistance, the everyday routine — that stuff is easy. Anyone can do that. But the right decisions, the decisions that challenge you, the ones that push you to evolve and grow and learn, are always a little scary.

I'm thinking about all this because this month marks the 10 year anniversary of Coding Horror. I am officially old school. I've been blogging for a full decade now. Just after the "wardrobe malfunction" Janet Jackson had on stage at Super Bowl XXXVIII in February 2004, I began with a reading list and a new year's resolution to write one blog entry every weekday. I was even able to keep that pace up for a few years!

Janet Jackson clothing malfunction

The ten year mark is a time for change. As of today, I'm pleased to announce that Coding Horror is now proudly hosted on the Ghost blog platform. I've been a blog minimalist from the start, and finding a truly open source platform which reflects that minimalism and focus is incredibly refreshing. Along with the new design, you may also notice that comments are no longer present. Don't worry. I love comments. They'll all be back. This is only a temporary state, as there's another notable open source project I want to begin supporting here.

It is odd to meet developers that tell me they "grew up" with Coding Horror. But I guess that's what happens when you keep at something for long enough, given a modest amount of talent and sufficient resolve. You become recognized. Maybe even influential. Now, after 10 years, I am finally an overnight success. And: old.

So, yeah, it's fair to say that blogging quite literally changed my life. But I also found that as the audience grew, I felt more pressure to write deeply about topics that are truly worthy of everyone's time, your time, rather than frittering it away on talking head opinions on this week's news. So I wrote less. And when things got extra busy at Stack Exchange, and now at Discourse, I didn't write at all.

I used to tell people who asked me for advice about blogging that if they couldn't think about one interesting thing to write about every week, they weren't trying hard enough. The world is full of so many amazing things and incredible people. As Albert Einstein once said, there are two ways to live your life. One is as though nothing is a miracle. The other is as though everything is a miracle.

Watchmen page

I wasn't trying hard enough. I had forgotten. I can't fully process all the things that are happening to me until I write about them. I have to be able to tell the story to understand it myself. My happiness only becomes real when I share it with all of you.

This is the philosophy that underlies Stack Overflow. This is the philosophy that underlies Discourse. These are all projects based on large scale, communal shared happiness. Love of learning. Love of teamwork. Love of community.

Love.

For the next decade of Coding Horror, I resolve to remember how miraculous that is.

07 Feb 04:45

Q&A: Tom Francis, creator of Gunpoint, Heat Signature, The Grappling Hook Game!

by Staff

Tom Francis is the creator of a beautiful stealth puzzle game called Gunpoint. But there's more. He writes compelling short stories. He is on an amazing weekly gaming podcast. And he continues making games, including the new Heat Signature above. Now he answers these questions!

1. How long have you been making games?

Gunpoint is my first, and I started it around April 2010, just as a hobby project. I'd dabbled with level design and some very basic mod-making before that, but Gunpoint was the first time I really set out to make a game of my own.

2. Where do you find ideas for your games? Tell us something about your creative process.

The kind of game ideas that excite me usually start with something that already exists: maybe one aspect of a game that's already out there, one moment in a film that I'd like to experience for myself, or part of a story that I think could be turned into systems. And they only excite me as a game idea if I can see, in my mind, how all the systems would work to simulate this amazing ability, or interesting situation, or creative playground. So the core idea is usually something quite simple, that could be applied in lots of different situations, and would produce interestingly different results depending on how you played with it. The Crosslink is a simple idea - link anything to anything; which applies in lots of situations - any building with electronic security; and has different results depending on how you play - trapping guards, tricking them, knocking them out, setting up elaborate self-perpetuating machines, etc.

3. Gunpoint is a “2D stealth game about rewiring things and punching people”. Where does this rewiring idea come from?

I already knew I wanted to make a game about infiltrating offices, so I was trying to think of a hacking system. I wanted something that would let you make bits of the environment work for you, so I just tried to think of the simplest, most universal way I could do that. I thought back to how I wired up buttons to doors when designing levels for other games, and thought it might be fun to give the player the ability to mess with that themselves.

4. As a player, there is a lot of creative freedom in Gunpoint (mostly because of its rewiring mechanics). Do you remember any crazy stuff someone has pulled off to solve a puzzle?

I watched one player die again and again, because they were luring a guard over to open a door and getting shot by him as soon as he did. They kept trying to pounce on the guard the moment the door opened, but couldn't get the timing right. There were dozens of simpler ways to do it - not least just hiding until he turns around - but they clearly had this idea in their head and were determined to pull it off. Finally they got it just right - the second the guard opened the door to investigate a noise, the player came flying through it fists first and pinned him to the ground before he could react. It was kind of brilliant once they managed it. They punched him a *lot* of times.

5. Let's talk about prototypes! Heat Signature and The Grappling Hook Game... Which one are you more excited about and why? (Note: randomly generated spaceships are awesome!)

Tough question! I'm very excited about both, but in different ways. I think if I make them both perfectly, the grappling hook game will probably be more immediately fun to play, more satisfying in a tactile way. And if I made it perfectly, it would also be multiplayer, which adds a different kind of fun. But Heat Signature is richer in terms of systems and big ideas. And the biggest difference between them is that the grappling hook game is in Unity, which is new to me, so progress has been much slower. Game Maker is already an easier tool to work with, and I have three years of experience with it, so the rate at which I can add stuff to Heat Signature is just exhilarating. Right this moment, for that reason, Heat Signature's probably more exciting.

6. You are also a writer (we read your short stories in the Machine of Death collections!). Are you planning to write more? What was the last book you read? What authors do you like?

Good to hear! I'm always interested in writing more short stories and short scripts, but I tend to do it when an opportunity comes along - something that justifies taking a little time away from game development. I wrote a very short script for a heist movie when I came across some concept art for one that hadn't been used. Right now I'm reading The Panda Theory by French noir writer Pascal Garnier. It's intriguing but I haven't got to the meat of it yet. My all-time favourite is probably Douglas Adams - Hitch Hiker's and Dirk Gently are what got me interested in writing in the first place. Wry, absurdist humour running through everything really works for me.

7. You are on a weekly gaming podcast called The Crate & Crowbar (we've just subscribed!) Any favourite episode of the show you remember? And what about the weirdest?

I think the funniest single thing I can remember is Graham comparing Spelunky's deaths to Ernest Hemingway's one-sentence story, in the episode: for sale: spring boots, never worn.

8. If you have to choose three and only three game developers to follow their work closely, which ones would you choose and why?

Tricky, because there are so many amazing developers but so few with a really long track record of consistently awesome stuff. Introversion spring to mind right away. Valve, naturally. And then... I'd love to pick another indie, but the truth is, if Bethesda make a new Elder Scrolls game and I can't have it, my life is over. So Bethesda.

9. Are you a heavy gamer? What games are you playing now?

I am, although the way I play has changed since I started making games. Now that I know how much I can achieve in my free time, I have a very low tolerance for games that waste my time, or require hours of investment before they get good. Relevantly, the game I'm playing most at the moment is Assassin's Creed IV: Black Flag. Most AC games have so much scripted bullshit at the start that I can just never get through it to the open-world stuff. Black Flag still has more than I'd like, but it's less than the previous games, and the open world itself is twice as exciting. I love sneaking around plantations without setting off alarms - it's almost as good as the outposts in Far Cry 3.

10. One last random question (ok, not that random this time!). If you could turn any human activity into a stealth game, which one would it be and why?

I think turning human activities into stealth games is my profession now, but if I have to pick something more everyday than heists and spaceships... avoiding someone you vaguely know but don't really want to say hi to. That's the one time I'd really like to have a visibility meter, and some AI barks to let me know if I've been spotted yet.

 

*We Ask Indies is an initiative by Beavl, an Argentinian independent game studio putting some teeth into videogames. You can check all the interviews here (caricatures are made by amazing artist Joaquín Aldeguer!).

[Nico Saraintaris wrote this using sister site Gamasutra's free blogs]

20 Jan 09:56

3 reasons we should stop using navigation bars

by Kendra Gaines

If you're anything like me, you spend a lot of time studying other designers' work. I like to look at projects for the experience and the interactions created for the users.

Obviously, as new techniques come about, web design changes and newer, better things arrive. We've experienced the life of the splash page, the introduction header, parallax scrolling and so many other things that have affected the web experience. However, those things were mainly aesthetic and didn't really change the way we create websites.

Lately, I've been thumbing through some websites and have seen a new change. One I think I like, but I'm not sure about. A change that I could see really reinventing the way we think about designing websites. It would cause us to be smarter and think more intuitively about our audience. And that couldn't be a bad thing. This technique isn't unique to the world of responsive and mobile design; however, for some tablets and desktops, it's a new variety of navigation.

We aren't getting rid of menus altogether; we are just hiding them until they are called for. Could this be something that takes off?

 

How important are navigation bars?

The navigation bar was born right along with the Internet. Designers believe that placing all the menu content in clear view on a page just makes sense. And it’s hard to argue. If you come to a website for the first time, you want to know what’s available and where to go. It seems to have cemented itself as an important part of web design. Wireframing toolkits and programs include navigation bars, just like they include dummy text and buttons.

Navigation bars are presented in many different ways. Lately, sticky navbars have become very popular. Unlike the approach proposed here, a sticky nav bar is always present on the page. However, sticky bars are usually used on sites with heavy parallax scrolling (another huge trend). This can end up being a bit distracting, especially when it takes up a horizontal area at the top of a page.

It’s hard to argue the effectiveness of navigation bars. As a matter of fact, I won’t. They are effective and are the norm in web design right now. But, is there a better way to present our menus that could possibly change the entire way we think about web design? I believe so, and this way to change web design is to get rid of the navigation bar all together. But why?

 

3 reasons to stop using navigation bars

1. Fewer distractions

This is something I’ve touched on previously, but with the absence of navigation bars, there’s obviously fewer distractions. Navigation bars have become a place to store all the content you can’t fit on your website. On top of that, we put every single page we’ve imagined and come up with on the navigation bar. Some are junky and cluttered. Some have telephone numbers and search boxes. Some are just big and only have three small links on them. Some have drop-down menus that span the entire height of a website. What’s the point?

In the past few years we've come to notice that web design was becoming a little too cluttered, thus the resurgence of the ever-popular minimalist design. But instead of really fixing the problem of clutter, we've just stripped our web designs of the exciting stuff. In addition, the focus on the menu and the sitemap has really cost us the most important parts of the website. Immediately when we start designing, we are taught to think of the sitemap and how everything is going to connect. Imagine if we spent that time thinking about what the audience wants and how they're going to use it.

2. Customer Focus

At one point, I posed the question of whether or not flat design has made our web sites too simple. I’ve also asked other community members if they think minimalism is killing our creativity. I’ll spare you the lengthy read and summarize by saying this: we’ve traded in spectacular design for subtle web experiences. What do I mean? We’d rather have a simple blog with a white background, as long as the posts auto-scroll. We’d rather use a monotone or two-tone color scheme and make the highlight color something totally expected. Because we think that’s cool.

Now, I must admit that we must be wary of over-designing. It's something I don't recommend at all. But it seems like we just stopped designing altogether. And the things we find to be good design are really only things other designers can notice and enjoy. It took me about 5 years to learn the lesson that what a designer may think will look good isn't always what the customer thinks looks good.

In order to be successful with this, we have to focus on the customer/audience like never before. We have to try to figure out exactly what they want to see and how they want to see it. Navigation bars have kind of been like a guided process before, but since they're the norm we're just slapping them on a site as one-size-fits-all. The focus on the customer creates a greater connection with them and lends itself to experience-driven designs like never before.

3. Experience driven designs

Let’s build a bridge. This bridge connects what we want them to see along with how we want them to see it. The length of the bridge varies depending on how far away the two are from each other, but there must be a bridge nonetheless. We obviously want to have the smoothest bridge possible so the transfer of information can be as smooth as possible. By ridding ourselves of the navigation bar, we’ve created a platform to have a fully immersive brand design that should cater directly to the customer.

This allows us to now create experiences. Yes, we'll probably have to get away from the world of strict minimalism. However, this gets web design back to what it should be: a space on the web dedicated to the relationship between a brand and its customer. These experiences should make visitors more aware of the brand while also creating an interesting way to do so. Rather than just clicking a link and being taken to a whole new page, now there's an opportunity to really create something. There's an opportunity to take all the cool new advances in HTML and CSS3 (aside from just scrolling) and create something magical and mind-blowing.

 

Conclusion

Without that pesky bar at the top of our pages, it really frees up a whole new world of thought. I'm sure you're thinking: well, if you move it and it's hidden, then there's really no difference. But we are essentially taking away the very thing that moves viewers from page to page. How does one design a website like that? How does one maneuver around a website like that? It seems impossible, as if removing a bar couldn't have such a large impact, but I beg to differ. You can check any scrolling site that makes no large use of navigation.

Is this the next thing in the world of web design? Can you imagine going to a website that has no visible menu, but knowing where you want to go? It seems like a mighty interesting challenge; one many will take. Of course, the first problem would be for sites that are heavy on pages: Does your flyout menu contain tons of links or do you just learn to condense all the content? No navigation bars could really change the way of web design, but only the future can tell if this will be a new trend.

 

Have you built a site without a traditional navigation bar? Do you think navigation bars are essential in website design? Let us know in the comments.

Featured image/thumbnail, navigation image via Shutterstock.



15 Jan 22:09

The Biggest Threat to Your Bootstrapped Business

by Mike Taber

When you’re building a new business, there are a lot of things that can go wrong at any given point in time. Only a small subset of these problems have the potential to completely destroy your business, bringing it to a total halt. Most of these are obvious.

  • Problems with cofounders
  • Running out of cash
  • Natural disasters/Loss of data
  • Death of a founder

Disagreements among cofounders are one of the most common causes of business failure. If you're reading this article, chances are good that you're a solo founder. *phew* Dodged a bullet there, didn't ya!

Running out of cash is also a common problem, but it’s not often one that comes as a surprise. If a massive, unforeseen expense comes up, then it can be really hard to deal with. If the business is sound and the expenses are “small enough”, you can sometimes float the business on credit cards until you come through to the other side but that’s not always an option. Cash flow problems can usually be seen months in advance. There might not be anything you can do about it, but at least you see it coming.

Natural disasters or data loss of any kind really sucks because you have little or no warning. Maybe a water main breaks and floods your office, or your hard drive crashes. Maybe hackers broke into your server and deleted all of your customer records. Hackers trashing your server isn't as common as a hard drive crash, but it still happens often enough to be a concern. ( *Blatant pitch*: I have a security product called AuditShark that helps detect whether hackers are messing with your servers. Only if you're into that kind of thing, of course.) If your server dies and you thought you had a backup but you never tested the restore, you could go out of business in a hurry.

With the exception of dying, these are all known problems with solutions that you can apply. It’s hard to solve dying, but I hear Google is working on that one too. But hang on a second. There’s one problem that’s more devious than all of the rest for two reasons. First, you almost never notice until it’s too late. And second, it’s incredibly common in single founder businesses.

You Might Be Treading Water

You’re probably familiar with the term “sink or swim”. It’s the idea that when you are thrown into the thick of a problem, either you’re going to drown in the depths of that problem, or make your way out of it. Those are reasonable assumptions, but it’s an overly simplistic and inaccurate representation of the situation because there’s a third possibility.

The worst outcome is that you might simply tread water.

Sometimes that’s enough. But in a bootstrapped business, there are lots of different aspects of the business that are constantly vying for your attention. It could be anything from engineering, customer support, bug fixes, new ordering systems, scaling your marketing, hiring, building new features, responding to customer inquiries, sales demos, etc. At some point, you’re going to find yourself overloaded and instead of doing anything to move the business forward, you’re simply treading water.

Business Overhead

As your business becomes successful, you end up doing a lot of work that is what I like to call “business overhead”. It’s the work that needs to be done to keep the business running, but none of it moves the business forward. These include things like:

  • Any type of government paperwork (Please people, let it stop!)
  • HR functions, such as payroll, retirement plans, taxes, health or dental plans, etc. (Show me the money!)
  • Paying bills, accepting payments, dealing with taxes, or other financial transactions not directly related to making sales. (No, just the inbound money!)
  • Desktop, server or infrastructure administration. (A defrag here, a software update over there.)
  • Purchasing hardware, software, office equipment, etc. (New iPad. WOOT!)
  • Employee or contractor management (I have minions!)

These things tend to be necessary to keep the business running. If you don’t do them, the business can grind to a spectacular halt. But none of these things move the business forward in any way. It’s busy work. Business overhead. You’re doing nothing more than treading water.

But it feels like you’re working on the business. Let me be clear about something here. You’re not.

The work is necessary. But it’s not important.

Let me clarify that, lest you misunderstand. It’s hard to dispute that some jobs are important for you to do, and others are not. Paying bills is necessary. Do you have to be the one to do it?

Of course not. Unfortunately, this kind of work can really creep up on you. Over time, there’s more and more of this business overhead that starts getting in the way of moving your business forward. As it creeps up, you will have a tendency to spend more and more time working in your business rather than on your business.

It’s a subtle distinction, but it’s an important one. If your business can’t live without you, then you haven’t created a business. You’ve created a job. And that’s probably not what you want.

The biggest problem is that many people don’t realize that they’ve done this to themselves. They keep plodding forward, thinking that they’re making progress and they’re not. Anyone who’s ever been employed full-time at a regular company will recognize the difference between showing up and moving the business forward. Between treading water and doing things that matter.

Step 1: Admit you have a problem

If you realize that you’ve been treading water, that’s the first step in solving the problem. Because unless you recognize that it is a problem, there’s no way out and you will tread water until you drown. But once you recognize it, you can take action to turn things around.

Here’s the process for turning things around. First, figure out where you’re spending your time. Personally, I’m a big fan of identifying the things that I procrastinate until the last minute to do. For example, I hate, hate HATE paying bills. It’s not that I don’t have the money, or that I have a bizarre aversion to seeing money fly out of my checking account. Actually I do, but that’s not the real problem.

My problem is that I have three businesses and I get a LOT of paperwork in the mail. There’s a separate set of financials for each of the three businesses, plus I have all of my personal finances to manage. The paperwork alone takes at least four hours every month and sometimes as much as 10-15. If I procrastinate on reconciling my checkbooks for too long and make a mistake somewhere, it can take quite a while to find.

It’s also mentally draining to switch back and forth between the different businesses. Is this receipt for my consulting company or my software company? Can I write off this expense or no? Do I get reimbursed for this expense on my personal card and if so, which account should reimburse it? One month, I had a $54,000 credit card bill from American Express. If you have one, you know those need to be paid off at the end of the month. And while I had the money, it’s still depressing to see $54,000 come out of your bank account all at once in addition to all of your other expenses.

Step 2: Outsource like it’s your job… because it is!

So now that you’ve identified the problem areas, the solution is to outsource them. Find someone who’s qualified to tread water for you and get those jobs done. Your key to outsourcing those types of tasks is to thoroughly document the process. I like to use Google Docs for this, since I can embed screenshots, add videos via my Wistia account, add detailed instructions, and share the instructions across multiple contractors. Another nice advantage is that Google keeps a detailed revision history so if you ever need to go back in time, you can.

Your goal is to document it in such a way that anyone off the street can do the work, but clearly there are certain types of work that require some domain knowledge to perform. Bookkeeping is one example. Tax preparation, reviewing legal documents, doing payroll, IT administration, etc. These all fall under that umbrella, and it should come as no surprise that it covers nearly everything I previously listed as "business overhead".

Once you've documented the process, identify someone you can hand it off to. Hiring someone is a little bit trickier. Fortunately, there are a lot of great resources for hiring contractors. Can't find any good ones? Here are three from my podcast:

Step 3: Resist the Urge to Do It Yourself

This one is HARD. The problem is that you are already doing these tasks and have been doing them for a long time. You’re pretty good at them and you’re teaching someone else how to do them now. It’s easy for you because you’ve done it a hundred times. This new person hasn’t ever done them before so it’s natural to want to jump back in and take over when they inevitably make a couple of mistakes.

When you hand these tasks off, you need to set aside your expectations for two things: 1) Quality and 2) Response Time.

When I say that you set aside your expectations for quality, what I really mean is that you should recognize that this task is not likely to ever be done as well as you would do it. That’s ok. You’re the business owner. You’re supposed to care about the business more than they do. Mistakes will happen and there will be a learning curve. But given enough time, the person will learn what to do without your oversight. Eventually, it frees up your time and lets you swim, rather than tread water.

There are a few tips I’ve learned over the years here that I want to share:

  1. Make absolutely sure that you provide the person with not just the process, but the direction and ability to add comments, notes, and the authority to MODIFY the process. That’s right. You’re handing this off to someone else to make the decisions. They will need to make updates. So if you’re using Google Docs to manage the documentation, then make sure this person has read/write privileges.
  2. Stop worrying about it. You’re paying someone else to worry about it for you. I haven’t really said it until now but part of what you’re doing isn’t just freeing up your time. You’re freeing up space in your brain so that you don’t have to pay attention to this task anymore, or even remember that it needs to get done.
  3. Give the person a chance to learn, but have the guts to pull the plug if it’s not working out. I find that it takes at least 3-4 passes through a task for someone to get it right. Back in high school, I worked at a store where I trained for a few weeks and then was on my own. The next day that I came into work, the owner would take me aside and tell me all of the things that I either forgot to do, or didn’t do correctly. I hated that part. But I learned quickly that if I didn’t screw up, we didn’t have that “chat” and I was ecstatic the day he said “Good job, Mike”. I still made mistakes here and there. Speaking of which:
  4. Mistakes are going to happen. When we make mistakes ourselves, we don’t care so much about it. In fact, we simply correct it. There’s nobody sitting over our shoulders telling us the things we did wrong. In fact, quite the opposite. Your own employees and contractors will almost never point out that you screwed up! When someone makes a mistake, simply correct it and move on… EVEN IF IT COST YOU MONEY! Now if they ripped you off, then obviously you need to fire them and maybe get the authorities involved. But everyone makes mistakes. It’s not a big deal, nor is it generally a reason to take back a task that you’ve outsourced to someone else if they’re doing it in a reasonably competent manner.
  5. Don’t dive into outsourcing head first. If this is your first time outsourcing, then you should only outsource one task. It will seem like a waste of time, but I promise you it’s not. It’s an investment in learning how to effectively outsource and delegate tasks in your business.

Step 4: Review Progress For Several Iterations

A big mistake that I see a lot of people making when outsourcing is that they expect things to be perfect the first time. That’s not going to happen. You’re building a new process and putting someone else in charge of it. They need to learn how to do it and you need to provide guidance. They can’t do it alone. Expecting perfection out of the gate is a surefire way to disaster. You’ll end up firing the contractor and doing the work yourself, which is exactly the opposite of what you should be doing.

It should take anywhere from 4-6 iterations through the task, followed by a review of the work and corrections from you for the person to get things right the first time. If the process is complicated, it might take longer. If it’s simple, it might take less, but you get what you pay for so be careful.

Step 5: Walk Away

Once the contractor has started becoming comfortable with the process and the number of corrections after each iteration is almost zero, you can remove yourself from the process. If you’re still doing the approvals, stop doing them and have the contractor do it. Or better yet, get another contractor to issue approvals and have them work together.

By the time you’re done, hopefully you’ve been able to build a series of good systems for your business, some of which are fed from one contractor to another, and all of which free up your time to work on the business, rather than in the business. And you know what that means?

No more treading water.

Have some outsourcing tips of your own to share? Add them to the comments. I’d love to hear them.

14 Jan 22:42

The profound effect Steam can have on your success, shares Vertigo Gaming

by Staff

[by David Galindo]

One year after its release, it feels like I’m barely starting to launch my game.

"Cook, Serve, Delicious!" was released in October 2012 for PC, and has since moved on to several different platforms and distribution services. I wrote an article months ago outlining the first few months of sales, and today I'd like to wrap it all up in this one-year look at sales data, strategies for selling my game, platform performance, and huge opportunities that shot the game higher than I ever imagined it going. It's been an insane ride, and I hope this series of articles can help indie devs out there with their own strategies and game launches. So let's get to it!

Predicting Sales and the Mobile Market

When I wrote up my last sales article on Cook, Serve, Delicious, I had many predictions. First, I was planning on releasing the game to Android and iPhone in April, and have $3,500 in sales monthly on mobile alone, with my next game releasing in January.

Turns out all of that was completely wrong.

Firstly, the Android port took longer than expected, resulting in a July release. Secondly, I couldn't get the game to run properly on the iPhone 3 or 4 despite weeks of work, so I had to cancel that version outright; I couldn't cut the game's assets down enough to fit in 256MB of memory without also cutting gameplay components. And finally, my monthly mobile sales ended up averaging ~$1,500 a month, going lower and lower through the year, even with the added Android revenue in July. September hit rock bottom, with barely $1,000 in sales (a third coming from Android).

But let’s back up a bit. Android, while not the most lucrative market, has been quite a pleasant experience. Sales on Android from July through October 1st have netted $2,223 in actual revenue (after Google’s cut), with the iPad version bringing in $15,013 from Jan. thru Oct. Keep in mind the iPad sales account for $5,013 in January alone (the game was still new and on sale), as well as six extra months of sales vs. the Android version.

That Android number looks substantially lower than the iPad version, yet I'd port to Android again in a heartbeat. The developer tools are so much better on that platform vs. Apple: I'm able to release new updates in just several hours vs. the days in review that Apple takes to review the app, the UI for adding achievements and viewing data is so much more developer friendly than Apple's, and I'm able to respond in the reviews section to dissatisfied customers instead of watching helplessly as a user states problems in the iPad reviews section without any way for me to help them. The money isn't quite there, and as a platform I have to acknowledge that iOS will always take priority over Android since it was my main source of revenue, but I hope that disparity of revenue closes over time. I'd love to keep developing for that platform.

Android based retail platforms, however, are a different story. Eager to experiment, I tried putting the game on the Samsung marketplace, Amazon App Store, and several free-to-play experimental sites like Moriboo, all for Android. While the contracts for those retail stores prevent me from sharing hard data, I will say that in terms of sales up till today it was barely worth the day it took to port to these platforms.

The iPad has been my main source of income through October, though I haven’t done too much to really push that market. Major updates have already been posted for the game, and no advertising was done (for any version of the game, actually). Temporary price cuts in April ($2.99 for a week, down from $4.99) and July ($3.99) resulted in a small bump, but nothing substantial. Aside from the terrific $5,067 in sales in January, the year has been pretty small as a whole (the lowest being $674 in September). It was a far cry from my expected >$3k in sales a month. But then again, I did little to really push the game in any major way like I did with the DLC updates in January. You get back what you put in, most of the time.

Desktop Performance

Over on the PC/Mac/Linux side, things were similarly slow. Sales from Jan. through October were around $3,600 total just from my website alone (other websites that sold the game contributed an additional $600 for the same timeframe). In June, I was offered the opportunity to be in the Humble Store widget, a great platform for indie devs to sell their game through a trusted merchant (this was not the stand alone Humble Store that’s available now, but rather the widget devs can use on their own websites).

While the game didn't benefit from any Humble exposure at the time since the Humble Store didn't exist yet, simply using their service helped increase revenue. Sales were double what they were through BMT Micro, and while admittedly that's not a whole lot, it's pretty interesting when there's virtually no change in awareness for the game. I dropped BMT Micro from handling CSD transactions, and the Humble widget added an additional $2,100 from June through Oct. I hope to continue using the Humble Store services for all my future games.

Sales through October

Total sales from January through October resulted in $22,624 from mobile and desktop, which means since its release in October 2012 I've made about $34k in revenue from the game. After spending around $9-10k total on the game, that's a $24k profit, which sounds pretty good on paper... however, I had already quit my job in December to focus on games full time, which meant this was my only source of revenue (my other games contributed less than $2k total in the same timeframe). What this meant was my next game would have a small budget as well, and would once again be the make-or-break game for me... if my next project wasn't a breakout hit, I'd have to go back to work on something other than game making.

And then something happened that would pretty much change my life forever.
 

Greenlight

One morning I woke up to an email: my game, Cook, Serve, Delicious! was Greenlit. I was angry. This was a fairly legit looking email, but it had to be spam or some sort of scam, because I wasn’t even close to the top 100 for Greenlight (up to that point, Steam was only accepting 8-10 games per month or so). But I went to the Greenlight page anyways to see what could be going on…and there it was. The acceptance banner that the game was indeed Greenlit, along with nearly a hundred other games.

I was floored. I couldn’t believe it. It was truly like winning the lottery. I was so excited I was shaking uncontrollably. Holy crap. I’m on Steam. I’M ON STEAM! After being rejected three times with the Oil Blue by Steam, here was my next game, nearly a year later, being approved by Steam.

Once I calmed down, I went into strategy mode. I realized that I would be competing with these other hundred games that were approved alongside me. I quickly ran through all the games and determined which ones were finished and ready to distribute like mine was, and which were still in early concept/alpha phase. Some games were finished, but the devs didn’t acknowledge their Greenlight acceptance even days after it was awarded to them, so I figured those games wouldn’t be on Steam very soon. Thankfully, the engine I was using (Game Maker Studio) was fully equipped with Steam APIs for Achievements, Cloud and Leaderboard support, so I read up and started working like crazy to get my game set up for Steam.

Other games launched ahead of mine, and I took note of their release strategies. One dev refused to give Steam keys to people who got the game via a bundle, which led to a lot of backlash in the Steam forums, so I immediately offered keys on Desura, my website and the Humble Store for current owners. Other devs released their games with no extra Steam support, and some Steam community members were a bit put off. They had beaten me to release, but at the cost of not fully embracing the Steam platform. I had one chance to launch on Steam, and damn if I was gonna just throw my game up there without making it the best I possibly could.

The game was originally priced at $8.95; however, Steam recommended a $9.99 price point to better fit with the large selection of games. With that price raise I decided that not only would I be supporting Steam features, but I would also add things that I felt should have been there at launch, such as key binding support. I was worried about potential backlash over the $1 increase, but I have yet to hear a single complaint. I think the fact that I fully backed my game up with new content and Steam APIs helped quite a bit.

The Steam Launch

On October 8th, 2013, just a few days shy of one year since CSD officially launched, the game landed on Steam. And in just one day, I had made nearly $15,000 in gross sales, which was almost as much as I made in the entire last year on PC/Mac/Linux for CSD.  In two days, I surpassed it. In one week, I had made over $50,000 in gross revenue… more than I did in the last three years as a game maker and barista. My family couldn’t believe it. I couldn’t believe it. Finally, I felt redeemed. This was a path that was more than a dozen years in the making, but I had finally made it.

But man, Steam was just getting started. I participated in the November Steam sale and made triple my normal daily revenue even with the game at 50% off. The December Steam sale saw similar numbers.

I woke up one morning in December to a mess of Twitter replies. My game, at 4am that morning, had been chosen for a Flash Sale during the Steam Winter Sale at 75% off. I was a mess of emotions: I had no idea that was gonna happen, and was I really ready to have my game sell that low? (I had put in the 75% offer to Steam, so I knew it was possible, but you don’t know if you’re picked). Would my sales crash afterwards, with people not wanting to buy the game anymore after it was so cheap?

I frantically looked at my sales data. Only a few hundred dollars it seemed, hmmm, maybe people just aren’t that interested in the game. Then I realized Steam froze all data at 4am the moment it went on sale. I had no idea how it was doing.

I was excited, until I realized I had told some people earlier on Steam and Twitter that they should buy the game at 50% off since CSD wasn’t chosen for a Flash Sale. I felt terrible, and quickly posted in the forums that it was completely my fault, and I would be more than happy to buy a game for them on Steam should they feel that they were mislead, as they certainly were, albeit by accident. That seemed to calm some folks down, but surprisingly no one took me up on it. That offer still stands, by the way.

The sale ended at 12 noon. I checked the data…$6,000 sold, a nearly 6x increase from yesterday. That’s pretty darn good! Wait…that’s only the data from 4 to 5am. As I kept refreshing, the sales kept rolling in. Thankfully I had to leave the house with visiting family to do some stuff, which kept me from frantically clicking the refresh button.

We stopped by Schlotzsky’s later that day for lunch, which was great since I don’t own a smartphone, had my iPod with me, and they have Wi-Fi. My sister and I stood in line as I quickly connected online and got to the sales page.

Over $50,000 gross was made in just under eight hours. Over 20,000 copies sold in that timeframe. 20,000! My last game, the Oil Blue, has yet to break 1,000 copies sold in the last three years since release.

“I’m buying lunch.”

After Effects

I wasn't sure how the game would sell after the Flash Sale. My thought was that sales would drop dramatically since the game had been so cheap for those few hours, but as it turned out, daily revenue actually doubled from the Flash Sale until the December sale was over.

I wasn’t sure how sales would go after the big holiday extravaganza, but as it turns out after a week or so sales went back to my normal revenue I was making before the sale. Insane.

Not only that, but sales on other platforms increased during the Steam launch, with Android bringing in over $3,000 since October 2013 and iOS bringing in over $5,000 in the same timeframe.

The flux in sales for iOS (top) and Android (bottom)

People are still discovering the game, and every week it seems a new YouTube personality picks it up and the game gets a spike in interest. It’s the perfect streaming game for sure, and it’s hilarious to see everyone’s first rush hour.

Steam Total

So the total so far for Steam? After the normal revenue share and such, I’ve made over $130,000 in just three months on Steam. Typing that number still makes me shake my head. That’s impossible! No way. No. Way.

The grand total of units sold across all platforms is 52,539. That’s so much larger than anything I could have imagined, and so far daily sales average around 40-60 copies sold per day across all platforms (excluding sales/major promotional days).

PC and iOS were the big winners for me, but Linux is by far the least, with only one copy sold on the Ubuntu store and a little more than a thousand on Steam. It will be interesting to see if the Steam Machines and Steam OS will help improve Linux penetration.

What’s Next?

The future of Cook, Serve, Delicious! is winding down, but not quite over. Today I’ve launched an iPhone 5/iPod Touch 5 version of CSD, which was mainly done for fun and not really advertised in any major way. It should be interesting to see how that does, as I’m hoping for at least $500 in sales this month from that version alone.

I’ve also looked into porting CSD onto consoles, but that has been a fruitless endeavor. Talking with various “porting” studios, one wanted to port the game but felt a massive retooling was needed to make the game much easier and friendlier to casual gamers. NOPE. (that was especially odd to hear given that they’ve ported over a lot of hardcore/challenging games to Playstation systems.)

Another company, despite one of the devs in that company really enjoying the game and wanting the higher-ups to get back with me, never did. One dev team was eager to do it, but lacked the reverse-engineering capability to make it a simple port, which would mean they'd have to code it from scratch... a significantly difficult task that would involve too much of my time at this point since I'm already looking towards my next game.

It doesn’t look like the game will be ported to consoles, which at this point I’m ok with, since I’m already making plenty of money on Steam and ready to move on to new games. It was going to be more of a fun release, but what can I do at this point.

The Future

With the money made on Steam I’ve been able to boost the budget of my next game by nearly four times the original amount, allowing me to bypass Kickstarter and Early Access so that I can make a fully complete game. I hope to debut the game in March, and I feel this game has an opportunity to be a huge release, but only if I can make it a great game of course. I think I can do it.

If you’ve read all my previous “How Much do Game Devs Make” articles spanning the last few years, you know how hard it’s been to get where I am today. Most of the time I wanted to quit. There were times when I had to go back to work since games weren’t supporting me financially. Times that I didn’t have a plan B, and that really terrified me, because I didn’t want to do anything else in my life but make games. Now I can.

Or at least, I can for the immediate future. You’re only as good as your last game, after all.

Note: this is a series that started several years ago, so be sure to read part 1, part 2, part 3 and part 4 to get the most out of this article!

[David Galindo wrote this with sister site Gamasutra's free blogs]
14 Jan 02:20

To aspiring indie devs - Forget what you've read

by Staff

[by Folmer Kelly]

(note: nothing I write here is aimed at anyone in particular. If you feel offended, it's whatever. I probably felt offended by your shit. Also, this piece was originally posted on sagacityapp.com)

I know, I know; you're getting this a lot right now.

"10 tips for aspiring indies!"

"15 hard truths about being an indie dev!"

"8 things they don't tell you in video game school!"

I guess maybe it comes with the time of the year. Best Of lists show up all over the place, the IGF nominations are in, we start looking back and I guess that means we also start looking forward.

Before I released my first game, I used to eat that shit up. I read all of them and I believed every word simply because they were written by people who had done the thing I wanted to do but hadn't. These people were better than me by virtue of having done what to me seemed to be the impossible: They had made a video game. They were gods.

I wasn't going to question gods.

It wasn't just that though. The second part of it is, pretty much all of these articles list the same fucking things. And every time I read them, they became a little more true in my head. The facts cemented, then hardened, then they were stuck. Universal truths.

And that messed me up for a long time. Which brings us to the reason I'm writing this, and I'm hoping that anyone who is looking for "ASPIRING INDIE ADVICE" reads this because I refuse to believe I was the only one who ever fell into this poison mind state, or the only one who ever will.

It messed me up because after I released my first game, and then my second, and then my third, most of the shit I had "learned" turned out to be false. BUT: Because it was all written by people with experience, and because so many of those bullet points and snappy one-liners had been on a constant loop in my mind, I thought the problem was me, the problem was my games. Oh shit, I was making games wrong!

But then after I made my fourth, fifth... I started getting suspicious. And eventually I smashed the shit out of those cemented facts.

So let me run through some of the things that made me feel like I wasn't a "real" game making person for way too long:

1) Your first 10 games will be awful.

This is the bastard that really did a number on me. I read it everywhere. Your first 10 games will suck. Usually after that you get something like "so get them out of the way as soon as possible".

My first game was a minimalist platformer called ROOD. For some reason I thought it would be neat to put some secrets in there, stuff like if you walk in the wrong direction and jump over invisible blocks - and invisible gaps that will kill you - you get to find some super hard extra levels.

I released it and didn't think twice about anyone EVER finding out about those secrets, because, y'know, I had 9 more shitty games to make before people would start enjoying anything I did. To my surprise, some people DID find the secret levels, and even beat the ultra hard bonus levels.

My second game was called A Bat Triggered The Sensor That Activated The Defense Systems. I decided to put the game up for sale on flashgamelicense.com not because I thought I could sell it, but because I wanted the experience of being on the site. For later. Y'know, after I had made 8 more shitty games so I could start making good ones. To my surprise, I found a sponsor within a week.

My third game, if I'm remembering correctly, was a platformer called Underneath, which still (I just double-checked the Mochi ads to confirm this) gets played on portals daily. I didn't put it on FGL for sponsorship because, y'know, I still had 7 more bad games to go. I regret that now.

I could go on but now it's starting to feel like showboating. The point is: Your first 10 games don't have to be bad. You are allowed to see them as good games. Some people might love them. Someone in the world might think one of them is GOTY. There is absolutely no reason whatsoever to think your first 10 (or whatever number of) games will be bad. If they are bad (whatever that means to you), and you hate them, or no one cares about them, that sucks. But there's no rule that your first 10 games HAVE to be bad.

2) Start small.

Well, I did, and if you have a crazy-ass MMORPG planned out and no skills to make it real, then sure you would indeed be an idiot not to take this advice.

But with that said, if you are the type of person who wants to make a game that requires a lot of content / time / effort, and you believe you can make it happen, don't let anyone tell you otherwise.

3) What works for us might not work for you.

Seriously y'all I just got lucky lol. Aw shucks I have no idea what I'm doing really!

Fuck that. I get that indies like to be humble and shit (heck I just apologised for showboating a couple of sentences back), but this one always made me feel like it was impossible to make anything happen with indie games for the longest time, like any sponsorship or sale I got was down to dumb luck.

Look, if you have ideas to make and the skills to make them well, you can make shit happen. Yeah luck factors in, yeah networking and connections factor in, but guess what's a breeding ground for both luck and connections?

Yeah. The stuff you make before you have either.

4) Marketing is super important and its own job and a science and

Really though? Chances are, if you're into indie games, you're on the internet and exposed to things like twitter and indie gaming websites. (*) And that means you're exposed to all the marketing tools you need, and they're not hard to reach. Just posting a picture on twitter with the #screenshotsaturday hashtag can get you coverage on places like Indie Statik. As an aspiring indie dev, that's seriously all you need to concern yourself with.

(*= I might be off-base on this, I don't see how anyone could get into indie games without the internet but if I'm wrong let me know)

5) Prototype first, worry about art / music / etc later.

This one wrecked me for a while. I come from a graphic design background so for me it's much more pleasant to start with graphics and work from there. For some people making the music might be what inspires the rest of the game. There's no wrong approach. You're not less of a game making person if you don't start with code.

For me personally, moving placeholder boxes around just isn't inspiring, but I tried doing it that way for a while because I read that I should. I got way more productive once I decided on my own workflow.

Alright! I think that's everything. Let me wrap this up by saying that these are my experiences; by no means am I saying the advice I've singled out and disagreed with can't be valuable to you. It can be, and I hope it is. But if you start making games and your experiences don't line up with what those lists tell you, just know it's not your fault.

In closing, I'd be remiss if I didn't mention this: There is one thing on those lists, usually right at the top, that I fully agree with 100%. It has helped me immensely, and I think it's probably the only thing an aspiring indie dev really needs to know:

Make something and release it

[Folmer Kelly of Sets and Settings wrote this using sister site Gamasutra's free blogs]

07 Jan 22:36

What’s the Smallest SQL Server You Should Build?

by Brent Ozar

Before we pick CPUs or memory, let’s start by looking at SQL Server 2012’s licensing costs:

Sticker Price for SQL Server 2012

These are MSRP prices and do not include the extra costs for Software Assurance (maintenance) or any discounts for being a school, non-profit, or those photos you have of Bill Gates.

Physical, Standard Edition – you’ll notice that I went with 2 processors rather than 1. Technically, if you’re dying to save money, you could get away with installing SQL Server on a server with a single CPU, but if you want to save that much money, go virtual and avoid the hardware costs altogether. I could have also gone with 2 dual-core processors, but I only know of one dealer still selling those, and we’re talking about new servers here.

Physical, Enterprise Edition – that’s $6,874 per core, so it adds up fast.

Virtual, Standard Edition – here we’re using just 4 cores, the minimum license size Microsoft allows for a new server. You can build smaller ones (and I do), but as long as you’re licensing with Standard Edition, you’re paying per guest, and the minimum cost is $7,172.

Virtual, Enterprise Edition – if you really need Enterprise features in a virtual machine, you’ll most likely be running multiple SQL Server VMs. In that scenario, you’re best off licensing Enterprise Edition at the host level, and then you can run an unlimited number of SQL Server VMs on that host. (When I say “unlimited”, I’m using it the same way your cell phone company tells you that you have unlimited Internet.)
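
Since the pricing table itself lives in an image, here’s a minimal sketch (Python, purely for the arithmetic) that reconstructs the licensing column from the per-core figures quoted above: $6,874 per Enterprise core, and $7,172 for the four-core Standard minimum, which works out to roughly $1,793 per core. The two-socket, quad-core assumption matches the physical specs described later in this post; everything here is sticker-price math, not a quote.

    # Rough SQL Server 2012 core-licensing math using the MSRP figures quoted in this post.
    # Sticker prices only: no Software Assurance, volume discounts, or Bill Gates photos.

    STANDARD_PER_CORE = 7172 / 4      # the $7,172 four-core minimum quoted above, per core
    ENTERPRISE_PER_CORE = 6874        # quoted directly above
    CORE_MINIMUM = 4                  # minimum billable cores per processor (or per VM)

    def license_cost(per_core_price, cores_per_socket, sockets=1):
        """Total core licensing, honoring the four-core minimum per socket."""
        return max(cores_per_socket, CORE_MINIMUM) * sockets * per_core_price

    # The scenarios above: the physical boxes are 2 sockets of quad-cores, the Standard VM
    # is a single 4-core guest, and the Enterprise "virtual" option licenses every host core.
    scenarios = {
        "Physical, Standard":   license_cost(STANDARD_PER_CORE, 4, sockets=2),
        "Physical, Enterprise": license_cost(ENTERPRISE_PER_CORE, 4, sockets=2),
        "Virtual, Standard":    license_cost(STANDARD_PER_CORE, 4),
        "Virtual, Enterprise":  license_cost(ENTERPRISE_PER_CORE, 4, sockets=2),
    }
    for name, cost in scenarios.items():
        print(f"{name:<22} ${cost:>10,.0f}")

The Physical Standard and Physical Enterprise totals this spits out ($14,344 and $54,992) line up with the “$14k worth of software” and “$55k of licensing” figures cited further down.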

My failover cluster lab circa 2011. Feel the power of SATA.

When I’m designing servers, I start with the licensing discussion because it helps everyone focus on the real cost of the server. Often folks want to nickel-and-dime their way into 16GB of RAM and a pair of SATA drives, but once licensing costs come into play, they realize architecture here is different. Our goal is to absolutely minimize the number of cores involved – ideally deploying virtual machines as often as we can – and then when we need to go physical, we get serious about the hardware, because this stuff ain’t cheap.

Now Let’s Talk Hardware

Once you’ve picked your licensing and physical vs virtual, let’s talk hardware. I’m using a major server vendor, but the exact brand isn’t important – you can get similar pricing from the hardware vendor of your choice, and this post isn’t about making brand decisions.

Gentlemen, start your caps lock.

Notice how the numbers are displayed as total, licensing, and hardware? That’s how you need to present them to management. When a manager looks at those physical server numbers, the hardware is still clearly the cheapest part of this transaction. If they want to drive costs down, they can start by asking why this SQL Server needs to be physical – the real way to save money here is to drop down to the Virtual column.

Again, these are public sticker prices here based off the hardware vendor’s web site, and don’t include the extra costs of Windows, management software, or volume discounts. These prices also don’t include the cost of the drive space for the data and log files. Your choice between shared storage (SANs), local SSD, or local magnetic drives varies widely between shops, so I’m leaving that out. Let’s just focus on the basic hardware at first.

Physical, Standard Edition – this is a 2-CPU rack mount server with the fastest quad-core processors available right now, 96GB of the fastest memory, a pair of magnetic hard drives for Windows, and a pair of vendor-supplied-and-supported solid state drives for TempDB.

“BUT BRENT! YOU TOLD ME THIS WAS THE SMALLEST SERVER YOU’D DEPLOY, AND THAT SOUNDS LIKE AN INCREDIBLE SPEED MACHINE!!!1! WHAT ARE YOU THINKING? ARE YOU TROLLING ME AGAIN LIKE YOU DID WITH THE FRAGMENTATION POST?”

No, this is actually what I recommend to clients. You don’t waste dry cleaning money on your dad jeans, and you don’t run $14k worth of software on $3k worth of hardware. Besides, you want this thing to last for a few years, right? You don’t want to come running back to this machine again and again trying to fix performance problems that could be fixed with a basic injection of memory.

Physical, Enterprise Edition – the exact same 2-CPU box with the same processors, but upgraded to 384GB of memory and four 400GB SSDs for TempDB.

“BRENT ZOMG YOU MUST BE JOKING THAT’S CRAZY FAST, LIKE KEN BLOCK’S FORD FIESTA FAST. NOBODY NEEDS THAT MUCH MEMORY EVER, BILL GATES TOLD ME SO!!!!ONE”

Yes, my incredulous caps-loving friend, because we need to stay focused on the $55k worth of licensing costs, the dozens (hundreds? thousands?) of employees who rely on this server every day, and the capabilities in Enterprise Edition. Right now, the $55k of licensing you bought is being wasted on crappy hardware that’s more like Stason Lee’s Ford Fiesta.

Virtual, Standard Edition – since you’re licensing by the guest, you don’t have to buy an additional host for every new SQL Server you deploy. You can just mix these in with the rest of your virtualization farm and incur incremental costs. It’s certainly not free, but it’s nowhere near as expensive as a dedicated physical box.

Virtual, Enterprise Edition – since we’re going to license this at the host level, we generally only want to run SQL Server virtual machines on this host. Any other guests here are just wasting my very valuable CPU cycles and memory – at $55k for the licensing, I need to keep this focused just on SQL Server. Because of that, I’ve got a host equipped with the same power as my Physical Enterprise Edition spec – fast cores, high memory, and some local solid state to make a vSAN deployment easier for my VMware admins. (Disclaimer: when building out a real VMware host, I’d obviously tweak this – I’d end up with 10Gb Ethernet and/or FC HBAs, for example, but that depends on each shop’s network infrastructure.)

Yes, it’s another shock-and-awe post from Brent.

If you’re building physical boxes with 16GB of memory, and then you find yourself repeatedly going back to those boxes to do performance troubleshooting, the problem isn’t the app or the server or the memory.

The problem is you, and your old-school 16GB memory fixation.

You need to step back and look at the whole picture – licensing, business needs, RPO/RTO – and stop trying to save a few bucks in ways that hurt the server, the business, and your weekends.

...
There are tons of free SQLSaturdays coming up. Find one near you.

07 Jan 03:04

Getting Started with Arduino–The Maker Faire Edition

by John Baichtal
If you were dying to learn about Arduinos last September, the place to be would have been Maker Faire, at the Electronics Stage, where my colleague Andrew Terranova taught four crowds of visitors what those handy microcontrollers were all about.

Read more on MAKE

02 Dec 10:55

Inside the Facebook Snapchat phishing scam

by Troy Hunt

I’m frequently amused by the sort of stuff my Facebook friends “like”. For example:

[redacted friend] likes "Leaked Snapchats 18+"

The more salacious content you find around Facebook often has a hidden agenda, for example the classic "She did WHAT in school" scam I wrote about last year. Snapchat allows you to take a pic or a video and set an expiry date after which it’s "theoretically" destroyed, just the sort of stuff that appeals to sexting teens. By extension, "leaked" Snapchats are just the sort of stuff that appeals to a whole different audience.

Looking at the Leaked Snapchats 18+ page on Facebook, we can see it’s rather popular:

106.1K likes

Not bad for a fortnight-old page! The 106k-odd likes are legit too, at least insofar as it’s genuinely that many Facebook accounts that like the page. Why is this 10k more than the likes in the first image in this post? Because I took this image today (Monday) and the earlier image only two days ago. Popular indeed.

But what’s this – they can’t show the uncensored versions of the photos on Facebook – where’s the fun in that?!

Need to go to http://leaked-snapchatz.com for uncensored photos

We’d better follow the link to their site:

Need to logon to Facebook again

Ah, better log back into Facebo… hold on a minute, wasn’t I already logged in?! I’ve been solely in the iOS app until now, let’s just switch over to Chrome on the desktop and take a look:

Chrome reporting a phishing warning for the "Facebook" login page

Ah. Right.

What’s interesting about this is that in the context where people are most frequently using Facebook (i.e. on their phone), there’s zero phishing protection. You’re on your own.

Anyway, let’s stay in Chrome and take a look at the source code. The site above is nothing more than a frame which embeds a page from http://iphonecompetitions.org/lolscope2.html, which also fires off Chrome’s phishing warning. The URL gives you a sense that we’re probably not about to see what was originally promised; indeed, this is just the Facebook logon phishing page. If we jump back to the root of the site there’s a directory listing, and the only resource it discloses that’s of any interest is http://iphonecompetitions.org/iphone:

iPhone sweepstakes page

There’s a point to be made here about the multi-purposing of sites for various scams.

In terms of the phishing page itself, most of the content is loaded up off Facebook’s own CDN. For example, this image sprite and this style sheet are both off the same fbstatic-a.akamaihd.net Akamai CDN domain. I’ve often said there’s a lot a smart site like Facebook could do to put a stop to scammers just by restricting the use of their assets on other domains (a simple referrer check).
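
To make that referrer check concrete, here’s a minimal sketch in Python. It’s purely illustrative (it’s not how Facebook or Akamai actually configure anything), but it shows the idea: only serve a static asset when the Referer header points back at a domain you trust.

    # Minimal sketch of a hotlink/referrer check for static assets.
    # Purely illustrative: not how Facebook or Akamai actually run their CDN.
    from urllib.parse import urlparse

    ALLOWED_REFERRER_DOMAINS = {"facebook.com", "fbcdn.net"}

    def referrer_allowed(referer_header):
        """True if the Referer is absent (be lenient) or from an allowed domain."""
        if not referer_header:
            return True  # many clients strip Referer; blocking them would break legit users
        host = (urlparse(referer_header).hostname or "").lower()
        return any(host == d or host.endswith("." + d) for d in ALLOWED_REFERRER_DOMAINS)

    # A phishing page framing these assets sends its own URL as the Referer:
    print(referrer_allowed("https://www.facebook.com/somepage"))             # True
    print(referrer_allowed("http://iphonecompetitions.org/lolscope2.html"))  # False

It’s only a speed bump of course (a scammer can always copy the assets rather than hotlink them), but it would at least stop the laziest clones from pulling sprites and stylesheets straight off fbstatic-a.akamaihd.net.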

“Logging in” to the Facebook phishing page posts to http://iphonecompetitions.org/socialscope.php which then redirects to a rather erroneous error page:

Error page after "logging in" to the Facebook phishing page

One of the other pitfalls of browsing the mobile web compared to the desktop is that there’s no way to inspect a certificate before handing over the creds, like you can easily do in a desktop browser. This is another factor that increases the risk of falling victim to a scam like this whilst on your phone.

Speaking of which, how many people do fall for this sort of thing? 699 apparently – let me demonstrate. A quick Google for results on the iphonecompetitions.org domain reveals this:

Facebook accounts exposed in publicly accessible log

Bugger. If only scammers would learn how to apply proper access controls! They’ve since attempted to do just this and you now get a nice little 403 “Access Denied” when attempting to access the siphoned credentials URL. You’ve gotta marvel at the logic behind how this was secured – “I know, we’ll just make a really random URL then nobody will find it!” Inevitably this was disclosed to the Googles through the aforementioned directory browsing being enabled on the site.

So who’s behind this? Looking into the domain, it has the expected privacy controls in place to conceal the identity of the registrant. The original leaked-snapchatz.com domain also has privacy controls in place, albeit by a different provider. They’re also using different DNS providers, although a reverse DNS lookup on each shows the presence of such other domains as samsunggiveaways.com and hotbikinibitch.info, so it’s pretty clear there’s a bit of a pattern here, albeit one hidden behind layer upon layer of obfuscation. If there’s one thing I’ve learned about these scams over the years, it’s that they’re often entwined with multiple other scams run by different people in a tangled web of deceit (exactly what did you think the people responding to ads for "make $x working from home" are actually doing?!)
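
As an aside, the forward and reverse lookups referred to above are trivial to script. Here’s a minimal sketch using nothing but Python’s standard library; note that actually enumerating the other domains sharing a host needs WHOIS and passive-DNS data on top of this, and these particular domains have likely long since stopped resolving.

    # Minimal sketch: resolve a suspect domain, then do a reverse (PTR) lookup on its IP.
    # Standard library only; real infrastructure mapping would layer WHOIS and passive-DNS
    # lookups on top of this to find the other domains co-hosted on the same IP.
    import socket

    def investigate(domain):
        try:
            ip = socket.gethostbyname(domain)
        except socket.gaierror:
            return f"{domain}: does not resolve (possibly taken down)"
        try:
            ptr_name, _, _ = socket.gethostbyaddr(ip)
        except socket.herror:
            ptr_name = "(no PTR record)"
        return f"{domain} -> {ip} -> {ptr_name}"

    for d in ("iphonecompetitions.org", "leaked-snapchatz.com"):
        print(investigate(d))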

In searching for info on this scam, it turns out that the whole "leaked snapchats" angle is a bit of a thing. Probably unsurprising given the common use-case for the service, and equally unsurprising that it's been turned into credential harvesting on Facebook.

If there was any doubt as to the lengths scammers will go to in order to harvest your credentials, just before posting this today I took another look at the page and noticed the following update:

The death of Paul Walker

At the time of writing it had 447 “likes” and 27 comments, including this one from Snapshare, a different account that only appears to have popped up less than 24 hours ago and has been commenting on each post made on the page:

"Snapshare" account encouraging people to "like and share"

Within hours of his tragic death, this Facebook account phishing page was already using the demise of Paul Walker to attract new victims and encouraging them to “like and share” the page. The objective is self-propagation; if people like me see friends like mine liking or commenting then it’s a free ad for the scam. The likes of Nasir, Ben and Anthony in the image above have just given a credential harvesting scam a free plug by making sincere comments about a horrific event.

The morals of the story are as follows: There are numerous Facebook pages that are nothing more than fronts for credential harvesting or other scams. The heavy use of social media via mobile apps which don’t provide the same degrees of phishing protection as you find in browsers on the desktop increases the efficacy of these scams. Anything that attracts new victims is fair game, even if it means prospering from the death of others. And finally, if you really want free porn, just Google for it rather than handing over your Facebook credentials!

Update, 3 Dec 2013: Less than 24 hours later and the page is gone – good riddance! Inevitably we’ll see it replaced by others but at least the “credibility” this one built up via likes and comments is now gone.

01 Dec 12:30

Human Computer Interaction Diversity - 7 Devices that AREN'T a mouse and keyboard

by Scott Hanselman
Mix it up a little...try interacting with your computer with something OTHER than a Mouse and Keyboard
 

One of the most wonderful and least appreciated things about computers is diversity of devices. You're probably interacting with your computer with a keyboard and mouse. But in the last few years, you may have added touch and, to a limited extent, voice.

The photo above is of my desk. Yes, it's messy. On it are the things I use to work with my computer. I use these nearly every day, or at least every week, or they wouldn't last on my desk.

They are:

and not pictured

I think we all should consider our workflows and think about which devices that aren't a keyboard and mouse might be better suited for the tasks we perform every day.

Leap Motion

I initially gave this product a 10 for concept and a 0 for execution. I'll give it a 3 now...but it's getting better. It's still not well suited for gross motions, but for browsing and scrolling it's at least becoming useful. I keep it on all the time and since I haven't got a touchscreen on my desktop machine (yet) I use it for scrolling while reading and leaning back. It has huge potential and I'm impressed with how often the software updates itself.

I'm using the Touchless for Windows app. The concept is so promising...wave your hands and your computer reacts. I still don't suggest that the Leap Motion is a consumer quality device, but I do use it weekly and turn to its promise often.

LeapMotion Visualizer

Wacom Tablet

Tablets are the gold standard for interacting with Adobe products like Illustrator and Photoshop. I learned Photoshop on a Tablet many years ago and I still prefer using one today. The Wacom Bamboo also has touch support which is a bonus, although I use my Logitech T650 Touchpad as a trackpad as it's more sensitive (to touch).

If you're trying to draw or paint without a stylus like a Wacom, you're truly missing out. They are surprisingly affordable, too.

Wacom Applet

ShuttlePRO v2

My buddy at Channel 9 Larry Larsen turned me on to the ShuttlePRO for video editing. I don't know what I did without it. It's got program-sensitive programmable keys. That means their function changes depending on what's running. I can mark a key as "Split" or "Play" but the ShuttlePro software will automatically use the right hotkey depending on if I'm using Audition or Premiere. Some nice gent even made settings for Camtasia Studio and the ShuttlePro v2. If you do screencasts or video editing like I do, a shuttle is a must.

ShuttlePro Config

Logitech TouchPad

You can get a Logitech T650 Touchpad for less than $35 if you look around. It's a large, gorgeous glass touch area that's also wireless. If you have Logitech products already you can use the Unifying Receiver you may already have. I have mine plugged into a USB hub and it works just fine.

I use it to two-finger scroll, pinch to zoom, and all the things that MacBook touchpad folks take for granted. You can also use it with Windows 8 to "swipe in" and task switch. I move between my mouse and this touch pad to reduce repetition and wrist strain with the mouse, but also sometimes just because I'm in the mood. It's a great Trackpad/Touchpad that can ease the transition if you have trouble moving between a laptop and a desktop.

Logitech Touchpad

Great Webcam and Speaker Phone

I adore the Logitech BCC950 Conference Cam so much that I've written software to remotely control its motorized Pan-Tilt-Zoom functionality and use it as an Auto Answer Kiosk. Not only is it a great web cam that I use every day, but it's also a fantastic speaker phone for conferences. It shows up as just another audio device that you can set as the default for communication but not your default speaker. This means Lync and Skype calls come from this device, but your regular computer audio doesn't. Sound quality is killer, confirmed by everyone I've talked to with it.


Aside: There's so much untapped usefulness in just a webcam but most programs just don't use it. Have you seen the Windows 8 "Food and Drink" app? You likely already have this app. It has a Hands-Free mode for cooking. You know when you're using a tablet to show a recipe? It uses just the data from your webcam to see your hand wave to move to the next page. Very cool.

Kinect for Windows

The Kinect SDK was updated last year with support for "Near Mode", which is basically "sitting at your desk" mode. This update made programming against the Kinect for Windows a LOT more useful for the desktop. Writing apps is fairly easy, like this example where you can control PowerPoint with your hands. With apps like KinEmote you can use the Kinect to control your XBMC media installation and lots more.
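
To give a feel for the logic, here's a rough sketch of the gesture-to-navigation idea. The real Kinect for Windows SDK is a .NET API with proper skeletal tracking, so treat this Python snippet as nothing more than an illustration: the hard-coded hand positions stand in for whatever joint-tracking feed you actually have, and advance_slide() is a stub where a real app would send the keystroke to PowerPoint.

    # Toy swipe detector: if the tracked hand moves far enough to the right within a short
    # window of frames, treat it as a "next slide" gesture. The Kinect SDK itself is .NET;
    # this only illustrates the idea, fed by a hard-coded list of hand x-positions (metres).

    SWIPE_DISTANCE = 0.40   # rightward hand travel that counts as a swipe (made-up threshold)
    WINDOW = 10             # number of recent frames to look back over

    def advance_slide():
        # Stub: a real app would send a Right-arrow keystroke to PowerPoint here.
        print("Next slide!")

    def detect_swipes(hand_x_positions):
        recent = []
        for x in hand_x_positions:
            recent.append(x)
            recent = recent[-WINDOW:]
            if x - min(recent) >= SWIPE_DISTANCE:
                advance_slide()
                recent = [x]  # reset so one physical swipe only fires once

    # Fake frame data: the hand hovers, then sweeps right.
    detect_swipes([0.02, 0.03, 0.02, 0.05, 0.15, 0.30, 0.48, 0.50, 0.49, 0.48])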

Another little known fact is that the Kinect on a PC has a very nice quality Array Microphone that can also be used for things like Windows Speech Recognition or Dragon Naturally Speaking.

Me on Kinect

There's such a huge diversity of ways to interact with computers and it's truly just starting. In my lifetime I'm sure computers will be able to detect (guess) if I'm sad or happy, notice my health status, greet me when I walk up, and so much more.

What devices do YOU have plugged into your computer right now?


Sponsor: Big thanks to Red Gate for sponsoring the blog this week! Easy release management - Deploy your .NET apps, services and SQL Server databases in a single, repeatable process with Red Gate’s Deployment Manager. There’s a free Starter edition, so get started now!

Disclaimer: My Amazon links are affiliate links, and the resulting few bucks buy me gadgets and tacos. Mostly tacos.



© 2013 Scott Hanselman. All rights reserved.