September 12th, 2016: Jughead #9 is out now! I wrote it, I get to write Jughead comics now! You can read a preview RIGHT HERE
[Epistemic status: Very speculative. I am not a neuroscientist and apologize for any misinterpretation of the papers involved. Thanks to the people who posted these papers in r/slatestarcodex. See also Mysticism and Pattern-Matching and Bayes For Schizophrenics]
Bayes’ Theorem is an equation for calculating certain kinds of conditional probabilities. For something so obscure, it’s attracted a surprisingly wide fanbase, including doctors, environmental scientists, economists, bodybuilders, fen-dwellers, and international smugglers. Eventually the hype reached the point where there was both a Bayesian cabaret and a Bayesian choir, popular books using Bayes’ Theorem to prove both the existence and the nonexistence of God, and even Bayesian dating advice. Eventually everyone agreed to dial down their exuberance a little, and accept that Bayes’ Theorem might not literally explain absolutely everything.
So – did you know that the neurotransmitters in the brain might represent different terms in Bayes’ Theorem?
First things first: Bayes’ Theorem is a mathematical framework for integrating new evidence with prior beliefs. For example, suppose you’re sitting in your quiet suburban home and you hear something that sounds like a lion roaring. You have some prior beliefs that lions are unlikely to be near your house, so you figure that it’s probably not a lion. Probably it’s some weird machine of your neighbor’s that just happens to sound like a lion, or some kids pranking you by playing lion noises, or something. You end up believing that there’s probably no lion nearby, but you do have a slightly higher probability of there being a lion nearby than you had before you heard the roaring noise. Bayes’ Theorem is just this kind of reasoning converted to math. You can find the long version here.
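The lion example is easy to turn into actual arithmetic. Here's a minimal sketch in Python; every probability in it is invented purely for illustration:

```python
# The lion example as arithmetic; all the numbers here are made up.
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) via Bayes' Theorem."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

prior_lion = 0.0001        # lions are very unlikely near a quiet suburban home
p_roar_if_lion = 0.9       # a nearby lion would very probably produce that sound
p_roar_if_no_lion = 0.001  # a neighbor's weird machine, pranking kids, etc.

posterior = bayes_update(prior_lion, p_roar_if_lion, p_roar_if_no_lion)
print(posterior)  # ~0.08: still probably no lion, but far likelier than before
```

Even though the roar is nine hundred times likelier under the lion hypothesis, the tiny prior keeps the posterior low, which is the whole point of the theorem.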
This is what the brain does too: integrate new evidence with prior beliefs. Here are some examples I’ve used on this blog before:
All three of these are examples of top-down processing. Bottom-up processing is when you build perceptions into a model of the the world. Top-down processing is when you let your models of the world influence your perceptions. In the first image, you view the center letter of the the first word as an H and the second as an A, even though they’re the the same character; your model of the world tells you that THE CAT is more likely than TAE CHT. In the second image, you read “PARIS IN THE SPRINGTIME”, skimming over the duplication of the word “the”; your model of the world tells you that the phrase should probably only have one “the” in it (just as you’ve probably skimmed over it the three times I’ve duplicated “the” in this paragraph alone!). The third image might look meaningless until you realize it’s a cow’s head; once you see the cow’s head your model of the world informs your perception and it’s almost impossible to see it as anything else.
(Teh fcat taht you can siltl raed wrods wtih all the itroneir ltretrs rgraneanrd is ahonter empxlae of top-dwon pssirocneg mkinag nsioy btotom-up dtaa sanp itno pacle)
But top-down processing is much more omnipresent than even these examples would suggest. Even something as simple as looking out the window and seeing a tree requires top-down processing; it may be too dark or foggy to see the tree one hundred percent clearly, the exact pattern of light and darkness on the tree might be something you’ve never seen before – but because you know what trees are and expect them to be around, the image “snaps” into the schema “tree” and you see a tree there. As usual, this process is most obvious when it goes wrong; for example, when random patterns on a wall or ceiling “snap” into the image of a face, or when the whistling of the wind “snaps” into a voice calling your name.
Most of the things you perceive when awake are generated from very limited input – by the same machinery that generates dreams with no input
— Void Of Space (@VoidOfSpace) September 2, 2016
Corlett, Frith & Fletcher (2009) (henceforth CFF) expand on this idea and speculate on the biochemical substrates of each part of the process. They view perception as a “handshake” between top-down and bottom-up processing. Top-down models predict what we’re going to see, bottom-up models perceive the real world, then they meet in the middle and compare notes to calculate a prediction error. When the prediction error is low enough, it gets smoothed over into a consensus view of reality. When the prediction error is too high, it registers as salience/surprise, and we focus our attention on the stimulus involved to try to reconcile the models. If it turns out that bottom-up was right and top-down was wrong, then we adjust our priors (ie the models used by the top-down systems) and so learning occurs.
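To make the shape of that loop concrete, here is a toy version of the handshake; the threshold and learning rate are invented, and the real process is hierarchical and probabilistic rather than a single scalar comparison:

```python
# A toy version of the CFF "handshake" (threshold and learning rate invented;
# the real thing is hierarchical and probabilistic, not one scalar comparison).
def perceive(prediction, sensory_input, salience_threshold=1.0, learning_rate=0.5):
    error = sensory_input - prediction
    if abs(error) <= salience_threshold:
        # Handshake succeeds: the error is smoothed over and perception
        # snaps to the predicted model.
        return prediction, prediction
    # Handshake fails: the stimulus registers as salient, and the prior
    # (top-down model) is adjusted toward the sense-data -- learning occurs.
    return sensory_input, prediction + learning_rate * error

percept, model = perceive(prediction=5.0, sensory_input=5.3)
print(percept, model)   # 5.0 5.0 -- small mismatch, consensus reality

percept, model = perceive(prediction=5.0, sensory_input=9.0)
print(percept, model)   # 9.0 7.0 -- surprise: attend to the input, update the prior
```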
In their model, bottom-up sensory processing involves glutamate via the AMPA receptor, and top-down sensory processing involves glutamate via the NMDA receptor. Dopamine codes for prediction error, and seems to represent the level of certainty or the “confidence interval” of a given prediction or perception. Serotonin, acetylcholine, and the others seem to modulate these systems, where “modulate” is a generic neuroscientist weasel word. They provide a lot of neurological and radiologic evidence for these correspondences, for which I highly recommend reading the paper but which I’m not going to get into here. What I found interesting was their attempts to match this system to known pharmacological and psychological processes.
CFF discuss a couple of possible disruptions of their system. Consider increased AMPA signaling combined with decreased NMDA signaling. Bottom-up processing would become more powerful, unrestrained by top-down models. The world would seem to become “noisier”, as sensory inputs took on a life of their own and failed to snap into existing categories. In extreme cases, the “handshake” between exuberant bottom-up processes and overly timid top-down processes would fail completely, which would take the form of the sudden assignment of salience to a random stimulus.
Schizophrenics are famous for “delusions of reference”, where they think a random object or phrase is deeply important for reasons they have trouble explaining. Wikipedia gives as examples:
– A feeling that people on television or radio are talking about or talking directly to them
– Believing that headlines or stories in newspapers are written especially for them
– Seeing objects or events as being set up deliberately to convey a special or particular meaning to themselves
– Thinking ‘that the slightest careless movement on the part of another person had great personal meaning…increased significance’
In CFF, these are perceptual handshake failures; even though “there’s a story about the economy in today’s newspaper” should be perfectly predictable, noisy AMPA signaling registers it as an extreme prediction failure, and it fails its perceptual handshake with overly-weak priors. Then it gets flagged as shocking and deeply important. If you’re unlucky enough to have your brain flag a random newspaper article as shocking and deeply important, maybe phenomenologically that feels like it’s a secret message for you.
And this pattern – increased AMPA signaling combined with decreased NMDA signaling – is pretty much the effect profile of the drug ketamine, and ketamine does cause a paranoid psychosis mixed with delusions of reference.
Organic psychosis like schizophrenia might involve a similar process. There’s a test called the binocular depth inversion illusion, which looks like this:
The mask in the picture is concave, ie the nose is furthest away from the camera. But most viewers interpret it as convex, with the nose closest to the camera. This makes sense in terms of Bayesian perception; we see right-side-in faces a whole lot more often than inside-out faces.
Schizophrenics (and people stoned on marijuana!) are more likely to properly identify the face as concave than everyone else. In CFF’s system, something about schizophrenia and marijuana messes with NMDA, impairs priors, and reduces the power of top-down processing. This predicts that schizophrenics and potheads would both have paranoia and delusions of reference, which seems about right.
Consider a slightly different distortion: increased AMPA signaling combined with increased NMDA signaling. You’ve still got a lot of sensory noise. But you’ve also got stronger priors to try to make sense of them. CFF argue these are the perfect conditions to create hallucinations. The increase in sensory noise means there’s a lot of data to be explained; the increased top-down pattern-matching means that the brain is very keen to fit all of it into some grand narrative. The result is vivid, convincing hallucinations of things that are totally not there at all.
LSD is mostly serotonergic, but most things that happen in the brain bottom out in glutamate eventually, and LSD bottoms out in exactly the pattern of increased AMPA and increased NMDA that we would expect to produce hallucinations. CFF don’t mention this, but I would also like to add my theory of pattern-matching based mysticism. Make the top-down prior-using NMDA system strong enough, and the entire world collapses into a single narrative, a divine grand plan in which everything makes sense and you understand all of it. This is also something I associate with LSD.
If dopamine represents a confidence interval, then increased dopaminergic signaling should mean narrowed confidence intervals and increased certainty. Perceptually, this would correspond to increased sensory acuity. More abstractly, it might increase “self-confidence” as usually described. Amphetamines, which act as dopamine agonists, do both. Amphetamine users report increased visual acuity (weirdly, they also report blurred vision sometimes; I don’t understand exactly what’s going on here). They also create an elevated mood and grandiose delusions, making users more sure of themselves and making them feel like they can do anything.
(something I remain confused about: elevated mood and grandiose delusions are also typical of bipolar mania. People on amphetamines and other dopamine agonists act pretty much exactly like manic people. Antidopaminergic drugs like olanzapine are very effective acute antimanics. But people don’t generally think of mania as primarily dopaminergic. Why not?)
CFF end their paper with a discussion of sensory deprivation. If perception is a handshake between bottom-up sense-data and top-down priors, what happens when we turn the sense-data off entirely? Psychologists note that most people go a little crazy when placed in total sensory deprivation, but that schizophrenics actually seem to do better under sense-deprivation conditions. Why?
The brain filters sense-data to adjust for ambient conditions. For example, when it’s very dark, your eyes gradually adjust until you can see by whatever light is present. When it’s perfectly silent, you can hear the proverbial pin drop. In a state of total sensory deprivation, any attempt to adjust to a threshold where you can detect the nonexistent signal is actually just going to bring you down below the point where you’re picking up noise. As with LSD, when there’s too much noise the top-down systems do their best to impose structure on it, leading to hallucinations; when they fail, you get delusions. If schizophrenics have inherently noisy perceptual systems, such that all perception comes with noise the same way a bad microphone gives off bursts of static whenever anyone tries to speak into it, then their brains will actually become less noisy as sense-data disappears.
(this might be a good time to remember that no congenitally blind people ever develop schizophrenia and no one knows why)
Lawson, Rees, and Friston (2014) offer a Bayesian link to autism.
(there are probably a lot of links between Bayesians and autism, but this is the only one that needs a journal article)
They argue that autism is a form of aberrant precision. That is, confidence intervals are too narrow; bottom-up sense-data cannot handshake with top-down models unless they’re almost-exactly the same. Since they rarely are, top-down models lose their ability to “smooth over” bottom-up information. The world is full of random noise that fails to cohere into any more general plan.
Right now I’m sitting in a room writing on a computer. A white noise machine produces white noise. A fluorescent lamp flickers overhead. My body is doing all sorts of body stuff like digesting food and pumping blood. There are a few things I need to concentrate on: this essay I’m writing, my pager if it goes off, any sorts of sudden dramatic pains in my body that might indicate a life-threatening illness. But I don’t need to worry about the feeling of my back against the back of the chair, or the occasional flickers of the fluorescent light, or the feeling of my shirt on my skin.
A well-functioning perceptual system gates out those things I don’t need to worry about. Since my shirt always feels more or less similar on my skin, my top-down model learns to predict that feeling. When the top-down model predicts the shirt on my skin, and my bottom-up sensation reports the shirt on my skin, they handshake and agree that all is well. Even if a slight change in posture makes a different part of my shirt brush against my skin than usual, the confidence intervals are wide: it is still an instance of the class “shirt on skin”, it “snaps” into my shirt-on-skin schema, the perceptual handshake goes off successfully, and all remains well. If something dramatic happens – for example my pager starts beeping really loudly – then my top-down model, which has thus far predicted silence, is rudely surprised by the sudden burst of noise. The perceptual handshake fails, and I am startled, upset, and instantly stop writing my essay as I try to figure out what to do next (hopefully answer my pager). The system works.
The autistic version works differently. The top-down model tries to predict the feeling of the shirt on my skin, but tiny changes in the position of the shirt change the feeling somewhat; bottom-up data does not quite match top-down prediction. In a neurotypical with wide confidence intervals, the brain would shrug off such a tiny difference, declare it good enough for government work, and (correctly) ignore it. In an autistic person, the confidence intervals are very narrow; the top-down systems expect the feeling of shirt-on-skin, but the bottom-up systems report a slightly different feeling of shirt-on-skin. These fail to snap together, the perceptual handshake fails, and the brain flags it as important; the autistic person is startled, upset, and feels like stopping what they’re doing in order to attend to it.
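The difference between the two cases comes down to nothing but the width of the interval. A toy illustration, with all the numbers invented:

```python
# Toy illustration (all numbers invented): the same slightly-off shirt signal
# against a wide (neurotypical) vs narrow (autistic, per LRF) confidence interval.
def handshake_succeeds(prediction, sensory_input, confidence_interval):
    return abs(sensory_input - prediction) <= confidence_interval

predicted_shirt = 5.0
actual_shirt = 5.1   # a tiny change in posture shifts the feeling slightly

print(handshake_succeeds(predicted_shirt, actual_shirt, confidence_interval=1.0))
# True: good enough for government work, the deviation is ignored

print(handshake_succeeds(predicted_shirt, actual_shirt, confidence_interval=0.05))
# False: the handshake fails, and shirt-on-skin gets flagged as salient
```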
(in fact, I think the paper might be claiming that “attention” just means a localized narrowing of confidence intervals in a certain direction; for example, if I pay attention to the feeling of my shirt on my skin, then I can feel every little fold and micromovement. This seems like an important point with a lot of implications.)
Such handshake failures match some of the sensory symptoms of autism pretty well. Autistic people dislike environments that are (literally or metaphorically) noisy. Small sensory imperfections bother them. They literally get annoyed by scratchy clothing. They tend to seek routine, make sure everything is maximally predictable, and act as if even tiny deviations from normal are worthy of alarm.
They also stim. LRF interpret stimming as an attempt to control sensory predictive environment. If you’re moving your arms in a rhythmic motion, the overwhelming majority of sensory input from your arm is from that rhythmic motion; tiny deviations get lost in the larger signal, the same way a firefly would disappear when seen against the blaze of a searchlight. The rhythmic signal which you yourself are creating and keeping maximally rhythmic is the most predictable thing possible. Even something like head-banging serves to create extremely strong sensory data – sensory data whose production the head-banger is themselves in complete control of. If the brain is in some sense minimizing predictive error, and there’s no reasonable way to minimize prediction error because your predictive system is messed up and registering everything as a dangerous error – then sometimes you have to take things into your own hands, bang your head against a metal wall, and say “I totally predicted all that pain”.
(the paper doesn’t mention this, but it wouldn’t surprise me if weighted blankets work the same way. A bunch of weights placed on top of you will predictably stay there; if they’re heavy enough this is one of the strongest sensory signals you’re receiving and it might “raise your average” in terms of having low predictive error)
What about all the non-sensory-gating-related symptoms of autism? LRF think that autistic people dislike social interaction because it’s “the greatest uncertainty”; other people are the hardest-to-predict things we encounter. Neurotypical people are able to smooth social interaction into general categories: this person seems friendly, that person probably doesn’t like me. Autistic people get the same bottom-up data: an eye-twitch here, a weird half-smile there – but it never snaps into recognizable models; it just stays a jumble of weird, uninterpretable clues. So:
This provides a simple explanation for the pronounced social-communication difficulties in autism; given that other agents are arguably the most difficult things to predict. In the complex world of social interactions, the many-to-one mappings between causes and sensory input are dramatically increased and difficult to learn; especially if one cannot contextualize the prediction errors that drive that learning.
They don’t really address differences between autists and neurotypicals in terms of personality or skills. But a lot of people have come up with stories about how autistic people are better at tasks that require a lot of precision and less good at tasks that require central coherence, which seems like sort of what this theory would predict.
LRF ends by discussing biochemical bases. They agree with CFF that top-down processing is probably related to NMDA receptors, and so suspect this is damaged in autism. Transgenic mice who lack an important NMDA receptor component seem to behave kind of like autistic humans, which they take as support for their model – although obviously a lot more research is needed. They agree that acetylcholine “modulates” all of this and suggest it might be a promising pathway for future research. They agree with CFF that dopamine may represent precision/confidence, but despite their whole spiel being that precision/confidence is messed up in autism, they don’t have much to say about dopamine except that it probably modulates something, just like everything else.
All of this is fascinating and elegant. But is it elegant enough?
I notice that I am confused about the relative role of NMDA and AMPA in producing hallucinations and delusions. CFF say that enhanced NMDA signaling results in hallucinations as the brain tries to add excess order to experience and “overfits” the visual data. Fine. So maybe you get a tiny bit of visual noise and think you’re seeing the Devil. But shouldn’t NMDA and top-down processing also be the system that tells you there is a high prior against the Devil being in any particular visual region?
Also, once psychotics develop a delusion, that delusion usually sticks around. It might be that a stray word in a newspaper makes someone think that the FBI is after them, but once they think the FBI is after them, they fit everything into this new paradigm – for example, they might think their psychiatrist is an FBI agent sent to poison them. This sounds a lot like a new, very strong prior! Their doctor presumably isn’t doing much that seems FBI-agent-ish, but because they’re working off a narrative of the FBI coming to get them, they fit everything, including their doctor, into that story. But if psychosis is a case of attenuated priors, why should that be?
(maybe they would answer that because psychotic people also have increased dopamine, they believe in the FBI with absolute certainty? But then how come most psychotics don’t seem to be manic – that is, why aren’t they overconfident in anything except their delusions?)
LRF discuss prediction error in terms of mild surprise and annoyance; you didn’t expect a beeping noise, the beeping noise happened, so you become startled. CFF discuss prediction error as sudden surprising salience, but then say that the attribution of salience to an odd stimulus creates a delusion of reference, a belief that it’s somehow pregnant with secret messages. These are two very different views of prediction error; an autist wearing uncomfortable clothes might be constantly focusing on their itchiness rather than on whatever she’s trying to do at the time, but she’s not going to start thinking the itch is a sign from God. What’s the difference?
Finally, although they highlighted a selection of drugs that make sense within their model, others seem not to. For example, there’s some discussion of ampakines for schizophrenia. But this is the opposite of what you’d want if psychosis involved overactive AMPA signaling! I’m not saying that the ampakines for schizophrenia definitely work, but they don’t seem to make the schizophrenia noticeably worse either.
Probably this will end the same way most things in psychiatry end – hopelessly bogged down in complexity. Probably AMPA does one thing in one part of the brain, the opposite in other parts of the brain, and it’s all nonlinear and different amounts of AMPA will have totally different effects and maybe downregulate itself somewhere else.
Still, it’s neat to have at least a vague high-level overview of what might be going on.
In the same way that the guest facilities of the Watergate Hotel are not much remembered, neither is the political career of Elbridge Gerry, 9th Governor of Massachusetts and 5th Vice-President of the United States. Both have managed to have their names remembered down the years by having them attached to a particular form of scandal. Thus, every account of potential political wrongdoing and cover-up finds itself with ‘-gate’ stuck on the end of it, and any complaint about changing electoral boundaries is almost certain to be labelled a gerrymander. (The original ‘Gerry-mander’ was a constituency for the Massachusetts State Senate, said to resemble a salamander, and drawn in order to bolster the chances of Gerry’s supporters being elected)
‘Gerrymander’ is being thrown around a lot today as the Boundary Commission for England have announced their proposed boundaries for new constituencies in England. As these reflect new rules on the total numbers of MPs (down from 650 to 600) and the way in which constituencies are made up, there are plenty of major changes on the electoral map. Many existing constituency names disappear, others merge and mutate into new ones, and wholly new entities are formed. Compounding this is the general and ongoing effect of population movement and change in the UK, which means that every boundary review leads to a reduction in ‘Labour constituencies’ and an increase in ‘Conservative constituencies’.
To some, all this represents a gerrymandering of constituencies. To which I say no, this is a gerrymander:
(you might need to click on it to see it in its full ridiculous detail)
That’s how the thirteen congressional districts in North Carolina are allocated. The fourth, ninth and twelfth are all classic examples of the art of gerrymandering, meandering ribbon-like constituencies with only tenuous connections between the various parts of them, but the whole state has been divided up in bizarre and unusual ways to create a certain end result. North Carolina’s not the only state that looks like that – it’s a common feature across the USA, where most states have their boundaries drawn in an explicitly political process run by the state government, not an arms-length boundary commission.
(One point worth making here is that the aim of a successful gerrymander is not to create ‘safe’ seats for the party seeking to benefit from it. If a population is divided 50-50 between Party A and Party B, then giving Party A 50% of the seats won 90%-10% and Party B the other 50% won by the same margin just gives us a deadlock. However, if Party A can make 75% of the seats ones it’s sure of winning 60-40, Party B can have the remaining 25% of the seats to win 80-20, but will have no chance of winning overall power, despite both parties having the same number of votes.)
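That arithmetic is easy to check. A quick sketch, where both seat shares and vote shares are written as fractions of the total:

```python
# Checking the parenthetical's arithmetic: a 50-50 electorate carved two ways.
def totals(seat_plan):
    """seat_plan: list of (fraction_of_seats, Party_A_vote_share_in_those_seats)."""
    a_votes = round(sum(seats * a_share for seats, a_share in seat_plan), 10)
    a_seats = sum(seats for seats, a_share in seat_plan if a_share > 0.5)
    return a_votes, a_seats

# Symmetric split: each party takes half the seats at 90-10 -> deadlock.
print(totals([(0.5, 0.9), (0.5, 0.1)]))    # (0.5, 0.5)

# Gerrymandered split: A takes 75% of seats at 60-40, concedes 25% at 20-80.
print(totals([(0.75, 0.6), (0.25, 0.2)]))  # (0.5, 0.75)
```

Same 50-50 vote split both times, but the second plan hands Party A three-quarters of the seats.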
The Boundary Commission works within the rules set for it by the government (which are flawed), but the constituency boundaries themselves are not gerrymandered. Yes, there are some odd boundaries in there, but that’s almost always going to happen when trying to make natural communities fit within artificially imposed boundaries. The population of the country doesn’t live in a bunch of obvious communities that are all within the electoral quota needed to make a Parliamentary constituency, so boundaries are going to end up doing odd things.
The problem comes from the boundary review being part of a system that’s fundamentally broken at the national level. Claims from the Tories and Labour that the review might under or over-represent them as a result miss a fundamental point: our electoral system massively over-represents both of them. On the present – supposedly unfair to the Tories – boundaries, 37% of the vote got them 51% of the seats, while Labour got 35% of the seats in Parliament with just 30% of the votes and the SNP managed 9% of the seats on just 5% of the vote.
Complaining about gerrymandering in constituency boundaries is truly missing the wood for the trees (or the zoo for the salamander, if we’re trying to keep our metaphors straight). Why bother gerrymandering individual seats, when you’ve already got a system that’s massively biased in favour of you? If you want to reform the process, you need to remember that odd constituency boundaries and reviews like this are a necessary feature of our electoral system, not a bug. If you want a system that truly represents people, don’t get distracted complaining about non-existent gerrymanders, work instead to get us a better electoral system.
We all have our stories on where we were the morning of 9/11/01 when we heard. I don't think I've ever told mine here but it was no more remarkable than yours and maybe less.
I had my phone ringer off and my voicemail poised to answer any calls while I slept. I woke up, staggered to the bathroom and then noticed the number of waiting calls on my answering machine. I think it was something like 14 and I instantly thought, "Something has happened." It could have been very good or very bad, but when I played back the first message, I knew instantly it was in the "very bad" category.
It was from my friend Tracy and she was near hysterics, crying and moaning about "those poor people in New York." But she didn't say what it was that had happened to those poor people in New York. I listened to other messages and got a snatch here and a snatch there of what it was, then I rushed into my office, turned the TV on to CNN and sat there for hours with, I'm sure, the "Springtime for Hitler" look on my face. I was sitting right where I'm sitting now to write this.
I think I started watching about 8:30 AM Pacific Time. That was 11:30 in New York. By that time, the twin towers of the World Trade Center had each been hit. Each had burned for a time. Each had finally collapsed. The Pentagon had been hit. All air travel in the United States had been halted. New York Mayor Rudy Giuliani had ordered evacuations and other emergency efforts. (Whatever happened to that fine, brave man of that morning?)
Most of the shockers were over by the time we West Coasters joined the trembling audience but we didn't know that. We were still wondering: What can happen next? Is there another plane somewhere? Is there more to this? When the unthinkable happens, you brace yourself for more unthinkable things.
I flashed back, as most of us of a certain age have to with moments of tragedy, to 11/22/63 and the news that John F. Kennedy had been assassinated. Immediately upon hearing, we were all desperate to know: What can happen next? Will someone now assassinate the Vice-President? Is there more to this?
On both days, what had already happened was horrifying enough. But part of the horror was that sense of suddenly being in another world where that kind of thing happened…and you had no idea if or when something else like it would follow. On both days, it took a while to accept that maybe we were back to where most things made some sort of sense.
I'm thinking about that today and also about what would happen if a tragedy of that magnitude occurred today. I think we'd still have that feeling of being lost and helpless for a time. I'd like to think we'd have at least some of that feeling of togetherness and of being one country indivisible, with partisan differences set aside. But I don't think it would last very long.
I think the President of the United States would be impeached, and for many people that would be a higher priority than tending to the dead bodies and living victims. Even if that President had snapped into action, rather than sit in a classroom and read to children…even if that President hadn't ignored certain warning signs, I think we'd immediately have hearings like the ones on Benghazi, only bigger and more of them with real, not manufactured, outrage. Four Americans died in the Benghazi attack. When Americans and others were killed by attacks on U.S. embassies during the administration of George W. Bush, no one cared. No hearings were held. No one was blamed.
I'm not saying that was right or wrong; just that that's how it was.
3000 Americans died in the 9/11 attack and perhaps another thousand have died indirectly because of that day. So instead of seven investigations like we've had over Benghazi, we'd have 7,000 over an attack the size of 9/11…and yes, I know the math is ridiculous. I'm just trying to suggest scale here. Another tragedy the size of 9/11 or even a tenth the size would be a lot worse than Benghazi, right?
I don't think 9/11 brought our country to our current level of partisanship. We were well on our way to it back when they impeached Bill Clinton.
So now we have the situation where no matter who gets elected in November, 40-49% of the country will be livid and will be hating our new president and predicting the imminent destruction of the United States of America. Some will even in a way be hoping for it so they can say "See?" to those who voted "the wrong way."
So as I sit there — in the same place where I stared aghast at the morning of 9/11, sitting in the chair I bought to replace the one I was sitting in on that day — I don't think I'm scared of another tragedy of that size and scope. One of these days, there will be one, just as there will be hurricanes and earthquakes and massive fires and plane crashes…and I just accept that as the downside of being alive. The upsides are good enough that I can live with those possibilities. We've had them before and we survive them or we don't.
What does scare me are the unprecedented disasters, the ones that don't follow any history, the kind that leave us desperate to know, "What will they do to us next?"
And then, because of the way this country has changed in the last few decades, I'm really scared of what we'll then do to each other.
Updates: Commenter JT informs me that there’s already a vote-swapping site available: MakeMineCount.org. (I particularly like their motto: “Everybody wins. Except Trump.”) I still think there’s a need for more sites, particularly ones that would interface with Facebook, but this is a great beginning. I’ve signed up for it myself.
Also, Toby Ord, a philosopher I know at Oxford, points me to a neat academic paper he wrote that analyzes vote-swapping as an example of “moral trade,” and that mentions the Porter v. Bowen decision holding vote-swapping to be legal in the US.
Also, if we find two Gary Johnson supporters in swing states willing to trade, I’ve been contacted by a fellow Austinite who’d be happy to accept the second trade.
As regular readers might know, my first appearance in the public eye (for a loose definition of “public eye”) had nothing to do with D-Wave, Gödel’s Theorem, the computational complexity of quantum gravity, Australian printer ads, or—god forbid—social justice shaming campaigns. Instead it centered on NaderTrading: the valiant but doomed effort, in the weeks leading up to the 2000 US Presidential election, to stop George W. Bush’s rise to power by encouraging Ralph Nader supporters in swing states (such as Florida) to vote for Al Gore, while pairing themselves off over the Internet with Gore supporters in safe states (such as Texas or California) who would vote for Nader on their behalf. That way, Nader’s vote share (and his chance of reaching 5% of the popular vote, which would’ve qualified him for federal funds in 2004) wouldn’t be jeopardized, but neither would Gore’s chance of winning the election.
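The pairing protocol described above is simple enough to sketch in a few lines of code. This is purely a hypothetical illustration of the matching logic (all names and the function are invented for this sketch), not a reconstruction of how any actual vote-swapping site worked:

```python
# Hypothetical sketch of vote-swap pairing: each swing-state Nader
# supporter is matched with a safe-state Gore supporter. The swing-state
# voter agrees to cast a ballot for Gore; the safe-state voter casts one
# for Nader. Neither Nader's national total nor Gore's swing-state
# margin is hurt by the trade.
from collections import deque

def pair_swappers(swing_nader, safe_gore):
    """Greedily pair swing-state Nader supporters with safe-state
    Gore supporters, in the order they signed up."""
    swing, safe = deque(swing_nader), deque(safe_gore)
    pairs = []
    while swing and safe:
        pairs.append((swing.popleft(), safe.popleft()))
    return pairs

pairs = pair_swappers(
    ["nader_fan@fl.example", "nader_fan2@fl.example"],
    ["gore_fan@tx.example", "gore_fan2@ca.example", "gore_fan3@ca.example"],
)
print(pairs)  # two matched pairs; one safe-state volunteer left unmatched
```

A real site would of course need identity checks and some basis for trust between strangers, which is exactly where the legal and practical questions discussed below came in.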
Here’s what I thought at the time:
- The election would be razor-close (though I never could’ve guessed how close).
- Bush was a malignant doofus who would be a disaster for the US and the world (though I certainly didn’t know how—recall that, at the time, Bush was running as an isolationist).
- Many Nader supporters, including the ones who I met at Berkeley, prioritized personal virtue so completely over real-world consequences that they might actually throw the election to Bush.
NaderTrading, as proposed by law professor Jamin Raskin and others, seemed like one of the clearest ways for nerds who knew these points, but who lacked political skills, to throw themselves onto the gears of history and do something good for the world.
So, as a 19-year-old grad student, I created a website called “In Defense of NaderTrading” (archived version), which didn’t arrange vote swaps themselves—other sites did that—but which explored some of the game theory behind the concept and answered some common objections to it. (See also here.) Within days of creating the site, I’d somehow become an “expert” on the topic, and was fielding hundreds of emails as well as requests for print, radio, and TV interviews.
Alas, the one question everyone wanted to ask me was the one that I, as a CS nerd, was the least qualified to answer: is NaderTrading legal? isn’t it kind of like … buying and selling votes?
I could only reply that, to my mind, NaderTrading obviously ought to be legal, because:
- Members of Congress and state legislatures trade votes all the time.
- A private agreement between two friends to each vote for the other’s preferred candidate seems self-evidently legal, so why should it be any different if a website is involved?
- The whole point of NaderTrading is to exercise your voting power more fully—pretty much the opposite of bartering it away for private gain.
- While the election laws vary by state, the ones I read very specifically banned trading votes for tangible goods—they never even mentioned trading votes for other votes, even though they easily could’ve done so had legislators intended to ban that.
But—and here was the fatal problem—I could only address principles and arguments, rather than politics and power. I couldn’t honestly assure the people who wanted to vote-swap, or to set up vote-swapping sites, that they wouldn’t be prosecuted for it.
As it happened, the main vote-swapping site, voteswap2000.com, was shut down by California’s Republican attorney general, Bill Jones, only four days after it opened. A second vote-swapping site, votexchange.com, was never directly threatened but also ceased operations because of what happened to voteswap2000. Many legal scholars felt confident that these shutdowns wouldn’t hold up in court, but with just a few weeks until the election, there was no time to fight it.
Before it was shut down, voteswap2000 had brokered 5,041 vote-swaps, including hundreds in Florida. Had that and similar sites been allowed to continue operating, it’s entirely plausible that they would’ve changed the outcome of the election. No Iraq war, no 2008 financial meltdown: we would’ve been living in a different world. Note that, of the 100,000 Floridians who ultimately voted for Nader, we would’ve needed to convince fewer than 1% of them.
Today, we face something I didn’t expect to face in my lifetime: namely, a serious prospect of a takeover of the United States by a nativist demagogue with open contempt for democratic norms and legendarily poor impulse control. Meanwhile, there are two third-party candidates—Gary Johnson and Jill Stein—who together command 10% of the vote. A couple months ago, I’d expressed hopes that Johnson might help Hillary, by splitting the Republican vote. But it now looks clear that, on balance, not only Stein but also Johnson is helping Trump, by splitting up that part of the American vote that’s not driven by racial resentment.
So recently a friend—the philanthropist and rationalist Holden Karnofsky—posed a question to me: should we revive the vote-swapping idea from 2000? And presumably this time around, enhance the idea with 21st-century bells and whistles like mobile apps and Facebook, to make it all the easier for Johnson/Stein supporters in swing states and Hillary supporters in safe states to find each other and trade votes?
Just like so many well-meaning people back in 2000, Holden was worried about one thing: is vote-swapping against the law? If someone created a mobile vote-swapping app, could that person be thrown in jail?
At first, I had no idea: I assumed that vote-swapping simply remained in the legal Twilight Zone where it was last spotted in 2000. But then I did something radical: I looked it up. And when I did, I discovered a decade-old piece of news that changes everything.
On August 6, 2007, the Ninth Circuit Court of Appeals finally ruled on a case, Porter v. Bowen, stemming from the California attorney general’s shutdown of voteswap2000.com. Their ruling, which is worth reading in full, was unequivocal.
Vote-swapping, it said, is protected by the First Amendment, which state election laws can’t supersede. It is fundamentally different from buying or selling votes.
Yes, the decision also granted the California attorney general immunity from prosecution, on the ground that vote-swapping’s legality hadn’t yet been established in 2000—indeed it wouldn’t be, until the Ninth Circuit’s decision itself! Nevertheless, the ruling made clear that the appellants (the creators of voteswap2000 and some others) were granted the relief they sought: namely, an assurance that vote-swapping websites would be protected from state interference in the future.
Admittedly, if vote-swapping takes off again, it’s possible that the question will be re-litigated and will end up in the Supreme Court, where the Ninth Circuit’s ruling could be reversed. For now, though, let the message be shouted from the rooftops: a court has ruled. You cannot be punished for cooperating with your fellow citizens to vote strategically, or for helping others do the same.
For those of you who oppose Donald Trump and who are good at web and app development: with just two months until the election, I think the time to set up some serious vote-swapping infrastructure is right now. Let your name be etched in history, alongside those who stood up to all the vicious demagogues of the past. And let that happen without your even needing to get up from your computer chair.
I’m not, I confess, a huge fan of either Gary Johnson or Jill Stein (especially not Stein). Nevertheless, here’s my promise: on November 8, I will cast my vote in the State of Texas for Gary Johnson, if I can find at least one Johnson supporter who lives in a swing state, who I feel I can trust, and who agrees to vote for Hillary Clinton on my behalf.
If you think you’ve got what it takes to be my vote-mate, send me an email, tell me about yourself, and let’s talk! I’m not averse to some electoral polyamory—i.e., lots of Johnson supporters in swing states casting their votes for Clinton, in exchange for the world’s most famous quantum complexity blogger voting for Johnson—but I’m willing to settle for a monogamous relationship if need be.
And as for Stein? I’d probably rather subsist on tofu than vote for her, because of her support for seemingly every pseudoscience she comes across, and especially because of her endorsement of the vile campaign to boycott Israel. Even so: if Stein supporters in swing states whose sincerity I trusted offered to trade votes with me, and Johnson supporters didn’t, I would bury my scruples and vote for Stein. Right now, the need to stop the madman takes precedence over everything else.
One last thing to get out of the way. When they learn of my history with NaderTrading, people keep pointing me to a website called BalancedRebellion.com, and exclaiming, “look! isn’t this exactly that vote-trading thing you were talking about?”
On examination, Balanced Rebellion turns out to be the following proposal:
- A Trump supporter in a swing state pairs off with a Hillary supporter in a swing state.
- Both of them vote for Gary Johnson, thereby helping Johnson without giving an advantage to either Hillary or Trump.
So, exercise for the reader: see if you can spot the difference between this idea and the kind of vote-swapping I’m talking about. (Here’s a hint: my version helps prevent a racist lunatic from taking command of the most powerful military on earth, rather than being neutral about that outcome.)
Not surprisingly, the “balanced rebellion” is advocated by Johnson fans.
(CONTENT NOTE: This post discusses murder/filicide and child abuse, specifically the Austin Anderson case, and its links to systemic ableism)
Another day, another murder. Austin Anderson, aged just 19, was left in a field to die from dehydration and lack of crucial medication. By his own mother. And the media and the public are sympathising with the killer rather than the victim, because the victim was blind and autistic. (For more information I recommend this post by Grimalkin)
I saw the news on Facebook, made the mistake of reading the comments, and it felt like a punch in the stomach. How can this happen?
Why, after so many other murders of disabled people by their caregivers and the subsequent backlash by disabled adults against these ableist views, do those views – and the murders – persist?
Why are the methods of killing always so, so cruel?
Why are they sometimes called “mercy killings” in spite of this?
Why, when Anderson was crying out for help for as long as he was able, do people still jump to the horrible conclusion that, because he was disabled, he was automatically better off dead?
Why is autism in mainstream media always framed not from the point of view of an autistic person, but from the point of view of a neurotypical caregiver? (Think about it – would we let men control the feminist movement on the basis that they have daughters and other female relatives? I certainly hope not.)
Why is so little thought given to autistic people, in discussions supposedly about autism, that autistic lives are considered so disposable?
Why is the autistic person erased from the picture to such an extent that people only have sympathy for the killer, and empathising with a disabled murder victim is viewed by abled people as a lack of empathy? (Because in their eyes, the only “real” person in the situation, the only person available to be empathised with, is the abled person.)
Why is autism called a burden, an epidemic, a source of unending stress and misery, something to be eradicated, without anyone even considering that these are people they’re talking about?
Why is it that the huge stresses and strains of raising any child are (like all forms of labour traditionally ascribed to women) constantly erased and ignored, but as soon as the child is disabled, all abled people want to talk about is how all that hard work must be so stressful that literal murder is “understandable”?
Why do abled people not consider that the same ableist factors that make raising a disabled child hard make being disabled even harder? (Oh yeah, because they don’t think disabled people are people.)
Why can people simultaneously hold the views that autistic people are not allowed to engage in harmless stimming to cope with the stress of being autistic in an ableist world, and that neurotypical people are allowed to engage in literal murder to cope with somebody else dealing with being autistic in an ableist world?
Why is disability seen as a debate rather than a group of people, to the point that Facebook commenters think it’s okay to “just play devil’s advocate” when somebody died?
Why do people think being objective in this “debate” means having sympathy for that person’s killer?
Why are autistic people who object to all this so often dismissed as “high-functioning” and “not like my child”?
Why do neurotypical people want to divide us based on our ability to look and act like them?
Why do neurotypical people think autistic people aren’t “autistic enough” to have an opinion, but they can have an opinion when by definition they’re not autistic at all?
Why, when we put ourselves through debating our own humanity just to show solidarity with the victim, when we read these awful upsetting infuriating scary things about us and fight through autistic emotional overload just to show solidarity with the victim, when I had to wait until I had certain special interest material to keep myself steady enough to write this properly to show solidarity with Anderson, when our brains and an ableist society are fighting us every step of the way and we still want to show solidarity with the victim, do neurotypical people still think they can say we lack empathy?
Why do neurotypical people use perceived common traits of autism from the ableist mainstream point of view – lack of empathy, lack of theory of mind, and so on – as weapons to silence autistic people?
Why do abled people still mock the concept of ableism and attempts to reduce it? Why do abled people still think ableism is made-up?
This is ableism. Ableism kills. Ableism keeps on killing. And I’m already bracing myself for ableism killing again.
Fenris Wulf: Loki's Child (2016 edition). A witty political satire using pop music! I bet you're delighted already.
A guest post by David Gerard.
Every field has its standard ways to fuck up. Experienced artists never do these in public, but you'll see the lesser lights fall for them if you go looking. Someone gets a rush of blood and is struck by aninspiration to do something different, something the big guys aren't doing, for a new take on things! Not realising that the experienced artists don't do these things because they don't work.
Like writing a novel about the pop industry. No, better: a political allegory in the form of a novel about the pop industry. No, better still: a right-wing fever dream political allegory in the form of a novel about the pop industry. Yep, that'll show 'em all!
Phil Sandifer tweeted a "hey, look what I just stepped in":
As a connoisseur of the worst of popular culture — and novels about the music industry are definitely the worst of popular culture — I foolishly looked. (Doing a great swan dive naked into the abyss and wallowing while sending back reports probably involves staring at some point.)
The least-unreadable examples of this species of folly, that don't make you shout "WRONG! WRONG! BULLSHIT!" twice a page, tend to be thin fictionalisations of real events; plenty of fucked-up shit happens in the music industry that makes people go "someone should write a book about this." Platinum Logic by Tony Parsons fails as an even slightly coherent novel, but every lurid and tawdry incident in that book is a version of something that happened, and spotting the players is the fun part.
There are genuinely good novels that have the music industry as a theme — Pratchett's Soul Music, Banks' Espedair Street — but these tend not to be about the music or the industry as such, and avoid going into too much detail even as they slip in the in-jokes. Even The Commitments, which is literally about a band. They resist the urge to be didactic. It's a trepidatious endeavour, though: get the details even slightly wrong and you look like a fool.
Fenris Wulf gets the details mostly right. The problem is everything else.
"Hi, I'm Fenris. I've considered myself to be a LaVeyan Satanist for about 10 years, and I also embrace a Lokian version of Asatru, as my name indicates."
Loki's Child is published by Castalia House, i.e. Vox Day, the abovementioned human dumpster fire, who is now most famous for doing his damnedest to fuck up the Hugo Awards with the Rabid Puppies.
I thought Day was the author of this thing — "Fenris Wolf" was the name of his 1990s video game studio, and he was once in a band with minor hit records so he's brushed up against the business. But Vox is not known for either patience or attention to detail, and this is a work the author's been polishing and polishing for years — he put the first version up as Record Producer from Dimension X in 2005, then publicised the rewrite as Loki's Child in this 2011 post to a fan board for Ayn Rand's right-wing political philosophy Objectivism:
The novel gets progressively more ingenious, and it exposes the disgusting evil of the nihilist Left in a way that hasn't been done before. The heroes are based on various pagan gods, and the villains are based on historical movements such as the Jacobins, Luddites, Puritans, Aztecs, and others.
Well, I'm glad he enjoyed it himself. ("Aztecs"?)
Here's how he describes his own brilliance on a Ron Paul forum a year later:
It's about a group of musicians who foment an insurrection against the federal government. It creates a detailed alternate history in which America is taken over by the radical Left and collapses into dictatorship and cultural psychosis. It's simultaneously dark and hilarious, surreal and all too believable. It's incredibly inventive and contains literally hundreds of passages that will make you laugh out loud. Its viewpoint is staunchly libertarian and it upholds a strict constitutionalist approach to everything from economics to education to war.
Here's the blurb from the author's site — he's a radio station engineer who "records local bands on analog tape" — describing his own book as "A libertarian tour de force ... a savagely funny takedown of culture and politics":
Meanwhile, the Jacobin Party is wrecking the economy, dismantling the Constitution, and smuggling weapons to street gangs in order to control elections through violence. Blenderman is drawn into a conspiracy to bring down the music cartel and the State itself, orchestrated by a young woman who worships Loki, the god of chaos.
Let's have a closer look at that cover:
I honestly couldn't tell if that image was a Photoshop disaster of a render, a Photoshop disaster of what was once a photograph of an actual human female, a Photoshop disaster of an artwork by an artist who couldn't draw, or an unholy cut'n'paste of some or all of these. It turns out that's a default model from 3D graphics software Poser, with a pose that appears to be from XNALara. Which is the laziest possible solution to the problem. (Really nice boots, I'll credit. Though the broken ankle, a standard telltale of anatomically incompetent modeling, is a bit of a worry. But full points to her being good enough to fingerpick an electric, which I'm sure was the artist's intent.)
The artist is RGUS (I looked through his DeviantArt gallery and kept shouting "BOOBS DON'T DO THAT" — there's a reason artists take life drawing classes), whose Poser/XNALara Castalia has used elsewhere. To his credit, when Day attempted to spam him into the 2016 Hugos he declined the nomination.
(Don't go looking for "Poser art" without SafeSearch on.)
Here's the original 2011 self-published edition cover, which is a photo of a guitar put through an oil-painting filter with eyes added, in homage to a rather more famous book cover. The 2016 cover is certainly more striking. And fully up to Sad Puppies cover quality requirements, of course.
The intro asserts the sound engineering and production sk1llz of the hero, Mixerman Blenderman, and his dab hand with ProTools SonoViz®. It ends with:
That is, this is a fictionalised crib of The Daily Adventures of Mixerman by Eric Sarafin. An amazing tale, purportedly real-life (and ringing fairly true), about Sarafin recording some well-funded bidding-war nobodies. I read it when it was originally being posted to ProSoundWeb in 2002. Every music industry sufferer will delight in it and you should read it.
Presumably this is an attempt at using your human "humor" (not "humour"). Reading this, I hear a three-second sample of canned laughter on loop with a mismatched splice, forever.
So we've started our novel about the pop industry with a rehash of an actually interesting insider tale and some hilarious parody. Fair enough, that's a standard approach.
Then he starts mixing in his opinions on women.
Feminazi bitches, amirite?
I totally describe myself to myself every time I see myself in a mirror, and I'm sure you do too.
The heels bit is a "wait, what?" He thinks these women look twenty-four, but he has them wearing heels that they literally haven't learnt to walk in yet. Let me tell you manly blokes a secret: it's not that hard to walk in heels. You walk on tiptoe and slightly move your hips in time. Keep your shoulders up, don't hunch. It takes hours to learn, not years. Including drunk. Of those women who wear heels at all, the only ones who go out, typically to see bands, in heels they can't walk in yet are literally teenagers.
"Clown paint" is an allusion to anarcho-capitalist (yeah, those two words can actually go together) cult leader Stefan Molyneux's famous admonition "stop making yourself look like fucking sex clowns to milk money out of men's dicks", his most renowned contribution to valuable Men's Rights Activist discourse on how women use makeup to oppress men.
Nobody talks like any of this, and that sort of hideous four noun pileup is the exclusive domain of people like me.
Here are the narratorial descriptions of these silly little girls who are so cutely play-acting at thinking they're a band:
Note that these detailed inventories of assets are taken in real time during a conversation.
Foolish human female, interrupting my important Brownian exposition to pretend to know as much about the thing you're actually here to do as a man would have picked up watching television and scratching his balls.
yes thanks that's great Fenris thank you
yes thank you Fenris
We're now into a chapter narrated by Scotty, Blenderman's trusty assistant, but it's good to see that he's completely — some might say indistinguishably — in tune with his boss's views on the filthy distaff of the species. Glad to see those fucking sex clowns won't be milking any money from your dick. Just like they didn't manage to in high school. (Or try to.)
This is clearly a cut and paste from an actual session. The Scotty chapters are about the didactic technobabble (and, of course, making it clear that those silly human females can't be expected to understand this stuff). The character's job is to expound Mr Wulf's views on recording — how much nicer analogue tape is than ProTools SonoViz®, LOUDNESS WARS, why he can't stand listening to MP3s. But job-related pub anecdotes don't in fact make good fiction writing.
This starts a passage defaming Mutt Lange and Shania Twain. But what I'm wondering is if "pet veal" is an allusion to Piers Anthony's "In The Barn".
This starts probably the first actually-amusing scene in the book ...
... but we can't be expected to notice it's funny unless the characters supply a written laugh track!!
Chapters 7, 8 and 9 are the band recording. Tedious riffing on Mixerman, and lots of another standard pop novel mistake: page-length slabs of purported lyrics. These are bad enough when an author's trying to write good fictional lines, but much worse when they're just writing a strawman to demolish. Never do this.
You fucking tool, Fenris. Steve Albini would rip your head off and shit down your neck.
The extended Mixerman crib is in fact the good part. The book then takes a right turn (Castalia books do not turn left) into political polemic — the victory of the anti-Social Justice Warrior warriors courtesy the singer, who is either virtually or literally Satan, and her musicians fomenting a Galtian anti-government rebellion, some anti-Islam ranting, lots of cribs from Ayn Rand and right-wing conspiracist nutcases — all the ideas you hope the meme-spouting Trump fans on your Facebook feed are only joking about, though you fear deep down that they seriously mean them.
Even an otherwise-positive Amazon reviewer notes: "In Part 2 and Part 3 the pillow of political ranting slowly suffocates the story, the characters, and the laugh out loud vibe of Part 1." The earlier versions attracted similar complaints. It's a pity that's the intended purpose of the book as far as the author is concerned.
I could continue dissecting it in horrified detail, but as Phil puts it:
On the other hand, it did momentarily make #2 in Amazon's "satire" category! So that's something. (Even if you can make #1 in an Amazon subcategory with three dollars and five minutes.) Also it's technically alternate history fantasy, so doubtless a hot favourite for next year's Rabid Puppies slate.
Everything about this book is written in crayon. Read Mixerman instead, it's vastly superior and contains every good idea in this book and none of the bad.
There's a reason there isn't a genre of novels about the pop industry — just a scattering of survivors and a burning garbage heap of cautionary examples.
David Gerard is an embittered superannuated music journalist. He normally writes this sort of thing for Rocknerd, publishing all the fits that's news since 2001. If you liked this piece, feel free to grab him a coffee. David gratefully acknowledges the vital assistance of observant goons in the composition of this review.
Hi my name is Fenris Blen'derman Teddybeale Wulf Rand and I have long fluffy black hair (that's how I got my name) with Objectivist streaks and helpful tips that reaches my mid-back and icy rational eyes like limpid tears and a lot of people tell me I look like Anton LaVey (AN: if u don't know who he is get da hell out of here!). I'm not related to Ayn Rand but I wish I was because she's a major fucking hottie. I'm a sound engineer but my teeth are straight and white. I have pale white skin. I'm also a Lokian Asatruer, and I go to a magic school called Castalia in Finland where I'm in the seventh year (I'm seventeen). I'm a Libertarian (in case you couldn't tell) and I wear mostly black. I love the Ron Paul forums and I buy all my ideas from there. For example today I was wearing dark angst with matching ennui around it and a black leather attitude, grey world-weariness and black combat boots. I was wearing no makeup, none of that clown paint. I was walking outside Mom's basement. It was snowing and raining so there was no sun, which I was very happy about. A lot of SJWs stared at me. I put up my middle finger at them.
then he put his Mises into my you-know-Rothbard and we did it for the first time
Chris Bieniek wants to know about a cartoon show from my youth…
I have a simple question: Why can't I purchase a complete, unedited collection of the 1966 Marvel Super Heroes TV show on DVD or Blu-Ray? I know you think it's awful, but I'm sure there are a lot of people who would be interested, and I've never seen any kind of informed discussion about why it hasn't happened yet. If you don't know, would you be kind enough to hazard a guess?
Well, I don't think it's exactly awful. Most of the stories and drawings were taken from the comic books — without, of course, paying an extra nickel to the guys who did that work for low comic book rates. A lot of the material was so strong that even the cheapest-possible animation and voice work couldn't render it unentertaining…and I kinda like some of the theme songs.
Why isn't it out for home video? Well, this is somewhere between a guess and a real answer: At least twice, folks who were attempting to assemble an actual, non-bootleg release contacted me to ask if I had any idea where they could find negatives or better copies than they had…because they simply didn't have good enough source material, especially of the opening titles and closing credits. I dunno if the masters were lost or destroyed or what — but at that point, they just didn't have prints that didn't look like they'd been taped off Channel 9 onto Betamax cassettes. I was of no use to them.
Has anyone since found good copies of everything? If they haven't, that's probably your reason. If they have, there's probably no one at Disney who thinks the material would generate enough interest. Generally speaking, when something is not out on home video, one or more of five reasons apply…
- There's a rights dispute over who owns the material or controls the home video rights. That's what held up the Adam West Batman show for some time. Twentieth Century-Fox (which produced the series) said that if anyone was going to put those out on DVD, it would be them. Time-Warner (which owns the characters) said it would be them. It took a while to negotiate an arrangement.
- There's music in the shows or films that would be very expensive to clear, and so the material might not be cost-effective to release. This is the case with some of the early Hanna-Barbera cartoons. Once in a while, also, someone had a contract, written before anyone envisioned a home video market, that caused complications. Companies got worried about that after Disney put out Lady and the Tramp on VHS and singer Peggy Lee, who worked on the film in several capacities, pointed out that her old contract from 1955 didn't allow for that. A jury awarded her a few million bucks that the studio wasn't cheerful about paying.
- They would release it but they can't find copies of all the material…or copies that measure up to the necessary video standard. A lot of old shows simply do not exist or exist in such bad condition that expensive restoration work would be necessary, and that reconstruction might not be cost-effective. At one point, I believe it looked like I'm Dickens, He's Fenster would never be out on DVD for that reason, but someone finally took the chance.
- Someone in a position of power just doesn't think there would be enough customers for the material in question to justify the investment. I believe the Walt Disney Treasures DVDs came to an end because the sales caused some at the company to believe there just plain wasn't an audience for certain of the less well-known Disney films and shows.
- They just haven't gotten around to it yet. This is less and less a reason as time goes by but years ago, there were a lot of angry animation fans who couldn't understand why all their favorite Hanna-Barbera or Warner Brothers cartoons couldn't all come out on home video at once. The company had determined, rightly or wrongly, that the market could only handle X number of releases at a time and so they wanted to space them out.
In some cases, more than one of these reasons can apply and at times, changes in management (or desperation for new product) have prompted the issuance of something on DVD that previously seemed like it would never be released that way. Also of course, it happens that rights problems get cleared up or someone in the warehouse stumbles across old negatives or tapes they didn't know they had.
In the case of the Marvel Super Heroes cartoons, it's probably Reason #4 but it might also be Reason #3 as well. Maybe someday, neither will apply.
Got a question you want me to answer on this blog?
Send it here. No politics, no personal replies...
and tell me if you want me to leave your name out of it.
Or, more to the point today, a good woman. Turns out it’s quite easy, in fact: all you need is a phone or an email account, and a certain kind of craven cowardice.
Quoting Sisyphus, whom I introduced in my previous post:
Hello again, Peter.
I enjoyed your blog post, though thank goodness I didn’t suggest reading it in any way with my class. As it turns out, I am no Sisyphus, and before I even began to teach the novel, one parent had written an email, and another called the principal (neither spoke to me) both outraged at the idea of teaching a novel which had at one point contained such language. I told my administrator, who is a completely reasonable man, by the way, to call off the dogs. If it was this big an issue before we’d read a single redacted page, it was going to become a catastrophe. I will continue to teach “Ambassador” in the future. And as for the kids who began reading the novel on their own, they were quite disappointed and asked if they might still be able to discuss the novel with you over Skype at some point.
Thanks for even considering this. It’s unfortunate how things turned out; in the words of Kurt Vonnegut: so it goes.
So it is not enough to be a good teacher. It is not enough to be a challenging teacher. It is not even enough to be an accommodating teacher, one so dedicated that she sought me out and enlisted my support for an act we both regard as downright odious— but were willing to commit if it meant that students could be exposed to new ideas and new ways of thinking. It is not enough to hold your nose and slash the prose and spread your cheeks in an attempt to appease these ranting, rabid Dunning-Kruger incarnations made flesh. They will not be appeased.
It is not enough to gut a book of its naughty bits. That the book ever had such bits in the first place is offense enough.
We do not know the names of those who complained; they struck out bravely under cover of anonymity. I do know the name of the school at which this travesty went down, but if I spoke it here the teacher would be fired. I find it curious that those so full of self-righteous fury, so utterly convinced of their own virtue, would be so averse to the spotlight. Are they not doing God’s will? Should they not be proud of their handiwork?
Strangely, though, these people don’t like to be seen.
In the end, it probably doesn’t matter. It’s not as though this is an isolated case, after all; it hails from the heart of a country where more adults believe in angels than accept evolution, a country where— in the race to rule a hemisphere— an orange demagogue with zero impulse control is once again even in the polls with a corporate shill who revels in the endorsements of war criminals. The problem is not one outraged parent, or one school, or one county. The problem is the whole fucking country. The problem is people.
Naming names in one specific case— even if that did do more good than harm— would be like scraping off a single scab and hoping you’d cured smallpox.
But there she is, doing her goddamned best in the center of that shitstorm: Sisyphus, and all those like her. Today she lost the battle, but I know her kind.
The war goes on.
In a time when barely an hour passes without something interesting happening in British politics, some people might have missed that Jeremy Corbyn’s position on the UK remaining in the single market appears to have got a little muddy this afternoon:
Labour source repeatedly refuses to say that Jeremy Corbyn wants Britain to remain a member of the single market
— Emily Ashton (@elashton) September 7, 2016
Now, this might all be a flash in the pan – though attempts to clarify Corbyn’s position don’t seem to be helping – but it feels potentially important for the future of the Labour party.
With my usual caveat that almost every prediction of a party split comes to nothing, membership of the single market feels to me like the issue that could act as a key division in a Labour split. If Corbyn wants to try to push a position of supporting the UK leaving the single market, remaining in it is a key issue (with a huge amount of current salience) that unites a big portion of the Parliamentary Labour Party from the right to the soft left. The divisions over the single market aren’t just in Labour either – Downing Street has already had to correct the Government’s own Brexit minister over his position on it.
If Corbyn won’t defend the single market, the thinking might go, there’s a huge space available for an opposition that will. It’s an issue that can create links across parties (such as to the SNP, the remaining Tory pro-Europeans and the Liberal Democrats) and also generate support from outside the parties. There are a lot of large businesses that would lose a lot if Britain loses membership of the single market (the Japanese are just the first to make this clear and public), and if such a split needed the funding and structure to become a party of its own, that would be a very important factor.
Now, this might just be a subject of interest for an afternoon and Corbyn might close it down by declaring his unequivocal support for the single market at his next press conference (‘I’m delighted to have the support of 63% of the people who worked on Bonekickers‘) but it’s clear that the UK’s relationship with the EU is going to be the fundamental issue in British politics for the next few years. If Corbyn is going to shift his public position on that to one not shared by the bulk of the PLP, it could be the trigger for the final breaking of ties.
Ignore Polly Toynbee by all means, but what lessons should the Lib Dems learn from the Coalition years?
Polly Toynbee has an article in today's Guardian whose headline tells you all you need to know:
Why I can’t forgive Nick Clegg and his party of useful idiots
Those of us who remember Polly Toynbee from the SDP - and even from David Owen's Continuing SDP - find it hard to take her entirely seriously in Tribune-of-the-People mode. We Liberals called them "the Soggies" for a reason.
And there is a dishonesty at the heart of her argument. When she writes:
The Lib Dems swallowed the story that the country needed a boiling down of every function of the state to its bare bones. They were useful idiots for what was always an ideological project

she ignores the fact that Labour fought the 2010 general election promising spending cuts that would be "tougher and deeper" than those implemented by Margaret Thatcher.
In other words, most of the cuts made by the Coalition would have been made by a Labour government too.
But I don't suppose you would make yourself popular with Guardian readers if you reminded them of that.
Even if we can set Toynbee's article to one side, we Liberal Democrats do need to decide the lessons we should learn from the Coalition years. Because I liked seeing us in power and I want to see it again.
So let me suggest three lessons - no doubt there are many others.
First we need to be more politically astute. Even if we are in coalition with another party, its members are not our friends and do not wish to see us prosper.
And I think Nick Clegg now recognises this. As he said in Saturday's major interview with Simon Hattenstone: "I did not cater for the Tories' brazen ruthlessness."
Second, we need a distinct Liberal Democrat approach to economics. One of the problems with the Coalition was that we had four considerable economists - Cable, Huhne, Laws and Webb - on our front bench, yet we ended up with Danny Alexander at the Treasury.
David Laws might have had the intellectual heft to challenge George Osborne (whether he would have wanted to is a separate question), but with Danny as chief secretary that was never likely to happen.
We fell too easily into saying that Labour had "overspent on its credit card" - or rather, we said that but had little interesting to add to it.
Third, we need a clearer idea of who the voters we want to appeal to are. The problem with imposing tuition fees was not just that we broke a pledge we should never have signed: it was that we let down the group that should be part of the core vote for a Liberal party: the educated young.
David Howarth's thoughts on this - and the lessons of coalition in general - are worth studying.
One thing I would say in Nick Clegg's defence is that these problems - a certain naivety about power; a lack of economic identity; a failure to decide who we are trying to appeal to - existed in the Liberal Democrats long before he joined.
(CONTENT NOTE: This is basically an unedited list of panics about heatwaves, so if that stuff happens to bother you too then proceed with caution, and if you’re claustrophobic it turns out there’s a lot of overlap!)
I’m not really sure if this is an autistic thing or not, but recently I’ve found that when certain Big Scary Things happen, I can remain fairly calm and in control at the relevant time only to make myself anxious by ruminating on the situation after it’s over. I think I find these thoughts more difficult to keep a lid on than the at-the-time thoughts because my usual thought-balancing mantras don’t really apply – I already know it’s over, I already know I’m safe (because it’s over), I already know I can deal with it (because I just did) so what else am I supposed to say back to my anxious brain? The two main situations that come to mind for this habit are when my ex tries to contact me again (which hasn’t happened in months now) and the one I’m going to talk about today – yep, regular readers please feel free to roll your eyes, this is another heatwave post! (If you’re new to the blog and/or the heat thing, here’s a quick summary of why heatwaves are overloading and terrifying and The Worst).
Last week, I decided to make a note of all the post-August-heatwave thoughts I had, couldn’t shake, and couldn’t really express much elsewhere, and then post it here with as little editing as I could, no matter how silly and self-conscious I felt (which is a lot, by the way…), in the vague hope that other (probably also autistic) people would “get it”. Weirdly, just doing this exercise has actually helped a lot; the act of filing away a thought with the promise it will be “dealt with” later seems to convince my brain it doesn’t need to do any more work on it, so I’ll probably write more of these lists in future, whether I post them or not! So without further ado, here’s my unfiltered autistic brain, fresh from dealing with its biggest and silliest fear and randomly throwing it back at me every so often:
That happened. That happened. I know it happened, it’s over, and I should move on, but I don’t know how, I don’t know what to move on to. That happened. And it’s going to happen again.
Here come the autumn posts. I haven’t posted anything like that yet because… I don’t know, I just don’t feel comfortable. I guess this is what they mean by “masking”. That, and it just never entered my head to do so. Will they think I was just faking or exaggerating the posts I made when I was panicking? What about the heatwave the other week, when I just couldn’t articulate the thoughts I might have wanted to express – are they suspicious that I didn’t really acknowledge it?
“It’s been another belter of a day-” NOPE. “Too warm for me-“ NOPE. Fanning yourself – DEFINITELY NOT. Why? Why do I panic and freeze up and freak out at people thinking exactly what I’m thinking, at the people most likely to be sympathetic? I *initiate* these conversations all the time, why don’t I like other people doing it?
It’s September. This shouldn’t be an issue.
It was two weeks ago. It’s over. This shouldn’t be an issue.
Run the following scenarios: Stuck in a lift. Locked in a car. Generic fictitious heatwave scenario. Google things. Regret it immediately.
“We haven’t had a very good summer-“ Haven’t we? HAVEN’T WE? Later, I reason that most people probably care more about sunshine than heat, and maybe there haven’t been as many hours of sunshine, even though the sunshine we’ve had has been so warm. I’m such a mess.
I JUST SLAYED A METAPHORICAL ARMY OF ZOMBIES AND NO ONE NOTICED.
“Hottest day of the year, and we decide to go into an unventilated basement, hahaha-“ NOPE. Pause the interview. Breathe. You can do this. It’s just an offhand comment, skip the next 30 seconds or so and they’ll have moved on. In hindsight, I’m fine, they’re fine, everything’s fine – in a way I find it funny, because special interest, you had one job! But it’s so scary, and so fucking pathetic, that my brain can just *do* that. How do I balance my thoughts when the only thought is fleeting wordless terror?
I feel guilty for the rain. People are wet and miserable and I wanted it. At the same time, I kinda resent that misery – I want to snap “it’s not THAT bad, we’re ALL wet, you’d be moaning if it was sunny too” and see how they like it. But two wrongs don’t make a right!
WHY AM I LOOKING UP OLD POSTS I KNOW WILL MAKE ME FEEL TOO AWFUL TO READ THEM PROPERLY. WHY AM I DOING THIS. WHY.
The eternal balance of trying to appear calm enough that people don’t think you’re ~weird~ and draw undue unhelpful attention to it, but not so calm that they don’t take your anxiety seriously. Like everything else. Disabled enough but not too disabled. I don’t think it’s possible. I think it’s a trap.
5th September, and I’m still seeing scary heatwave articles shared in my news feed. It’s probably nothing though, right? Certainly nothing compared to what we’ve had, at least. Still, I don’t know how to properly react.
Have I actually got to do another sixty of these???????? I wonder if I’ll eventually just get over it. I must do eventually, surely. At least I hope so…
First saw this about a year ago, but always worth a reread
Beyond the more obviously ideological axes we arrange politicians and parties along (left-right, authoritarian-libertarian etc) I think there’s also an axis on a scale I’d refer to as managerial-transformative. (Another name would be conservative-radical, but I’ve tried to go for something more neutral, and less confusing, as we shall see)
Managerial politics are based on improving things as they currently are through processes of gradual reform. It’s not a blind acceptance of a status quo, with no desire to change it, but more a belief that surface level reform of a situation is enough to make it work better. It presents itself to the public as a vision of competence – the idea behind ‘valence politics‘ – saying that the basic system is fine, it just needs to be run better than it is now.
Transformative politics, on the other hand, say that the system needs to be radically altered in order to achieve anything. This could be because the system was designed badly in the first place, or has just become unsuited to present times and conditions. Transformative politics are about bringing in a whole new way of doing things, not just making small changes to the old system. It presents itself to the public as a change and a break in the existing order of things.
It’s worth noting that these are an axis, not two alternatives. Politicians and ideas can tend to one side or the other and have different opinions on different subjects, though there is a general tendency in which side people present themselves as being overall.
Until historically recently, British politics had followed a rough pattern of alternation between the two poles. Managerialists would run the system until the problems within it became too much for it to continue, at which point power would be won by the transformatives, who would bring in a raft of changes to the system in order to renew and refresh it, be it the Great Reform Act or the NHS. After a while, though, they’d run out of things to transform – or start transforming things that didn’t need it – and they’d lose power to those who would now come in on a promise to manage the new system better than they did (in some cases, these would be the same people who’d managed the old system, but had since accepted the change and were happy to manage it).
The problems we face now stem from this system starting to break down in the 1960s and 70s. Up to that point, the Tories (and their ancestors) had generally been the party of managerialism, while Labour (and before them, the Liberals) had been the party of transformation (in this case, bringing in ‘the white heat of technology’). However, the Tories of the 70s, instead of promising to manage Labour’s changes better than they could, adopted transformative ideas of their own – Heath’s ‘Selsdon Man’ and then Thatcherism. This led to the confusion of 70s politics, with Wilson and then Callaghan trying to sort out the mess they inherited, rather than pushing transformative ideas of their own. This then led to the full switch of Thatcher’s government bringing in big changes to the system, followed by Major’s attempts to maintain them and finally Blair being elected. Blair represents both the end result of the switch that began thirty years before – a managerialist claim to be able to run the changed system better than its creators – and a switch back to the old system, promising to make radical changes to the system. Is the failure of New Labour down to people thinking they were getting something transformative, and instead ending up with something managerial?
The problem we have now is that just about every party now contains a mix of managerialists and transformatives. They can sit in similar positions on the conventional political scales, but are radically opposed on the managerialist-transformative one. However, because our political system is still built on the idea that there’ll be a steady oscillation between the two poles of that axis, things have started going wrong on a more frequent basis. In some areas, new ideas get piled on top of new ideas, with no time between them for them to be managed and allowed to bed in, while others remain stuck in the same mindset they’ve had for decades or more, no one willing to break away from the managerialist consensus.
So, that’s the rough shape of my idea – is it worth exploring further, utterly pointless, or have I just reinvented a wheel that someone else had already explained with much more detail and accuracy?
Someone named Sally writes…
Why are there so many producers on a TV show or movie? Sometimes, there are seven or eight or more.
Well, the first thing you need to know about the title "producer" is that it, in its various permutations, is just about the only title of any importance that can be bestowed on anyone. The Writers Guild has strict rules on what someone must contribute in order to get a "Written by" credit. The Directors Guild controls director credits. But if your company is doing a TV show, you can make your three-year-old son a producer on it.
So sometimes people get it for ceremonial reasons…like they were involved in the deal that sold the show. Or they're a biggie in the production company. Since it doesn't cost anything, the title is sometimes given out in negotiations. You ask for $25,000 to write a TV show. They counter by offering you $18,500 and a producer credit.
You can not only negotiate that, you can negotiate to be Executive Producer, Producer, Co-Producer, Supervising Producer, Creative Producer, Associate Producer, etc. There are no fixed definitions of any of those but obviously, some suggest that they're higher ranked than others.
Also, there's this: When I was doing the original Garfield and Friends show, my credit was originally "Written by," which was all I wanted. I didn't even want to be credited as Voice Director. Then one year, we were nominated for an Emmy for Best Animated Series and one of our Executive Producers, Lee Mendelson, realized something. According to the Emmy rules then, a Best Show Emmy went only to the producer(s) of an animated series. Lee felt it would be a shame if the show won and I didn't get a statuette so beginning with the following season, he added my name on as Co-Producer.
Nothing else changed. Just that. We never won, by the way. Those Emmy rules have since been changed and I believe now, someone who writes a certain percentage of the episodes qualifies for an Emmy if the cartoon show wins Best Series. But there are other situations where folks fight for producer credits because the way the rules are configured, if the show gets an Emmy, they don't. Unless they have a producer credit.
Lately, a lot of folks who in earlier days might have been credited as Story Editors or Script Consultants now ask for and get producer credits. Some stars want them. A manager who once wanted to represent me as a writer told me that if I signed with him, he would get 15% of everything I was paid but he would also demand an Executive Producer credit on any show or movie I wrote. If they wouldn't give it to him, we wouldn't take the deal. I did not sign with this person.
Long ago on a TV show, you could easily pinpoint which of the names in the credits was the person who had the main creative say. It was the man or woman designated as producer. Now, everyone's a producer so they refer to the person with the main creative say as the "showrunner," a title which I don't think ever appears on-screen.
What I'm getting at is that you shouldn't take producer credits too seriously. One might mean something or it might not. I did a show once with two Executive Producers. One had day-to-day involvement making important decisions…though not as much as the guy credited as Supervising Producer. The other Executive Producer was the agent who made the initial deal with the network to do the show. I wrote on that show for three years, never met that Executive Producer and almost never heard his name mentioned. He may not even have watched the program.
Got a question you want me to answer on this blog?
Send it here. No politics, no personal replies...
and tell me if you want me to leave your name out of it.
(Blogging continues to be sparse because, although I just sent in a final draft of "The Delirium Brief", I'm hard at work on other projects—notably my 2018 space opera, "Ghost Engine", and my 2018 Merchant Princes universe novel, "Dark State"—and taking time off to attend a birthday party in Berlin.)
The trouble with writing fiction is that, as a famous novelist once said, reality is under no compulsion to make sense or be plausible. Those of us who make stuff up are constantly under threat of having our best fictional creations one-upped by the implausibility of real events. I'm pretty much resigned to this happening, especially with the Laundry Files stories: at least space opera and fantasy aren't as prone to being derailed as fiction set in the near-present.
But there's a subtle corollary to the impossibility of story-telling keeping up with reality, and that's the point that it is also pretty much impossible to invent protagonists who can keep up with reality.
Let's face it, most people lead lives which are, to all outward appearances, pretty boring. They're not boring if they're you, but major life milestones (graduation from school/university, your first job, your wedding day, birth of a child, death of a parent) can be encapsulated in a single parenthesized list because they're so ubiquitous that most of us have some experience of them. The hyperfocussed realism of much literary fiction is simply an introspective examination of the minutest details of such ordinary lives, and while a good writer can make the ubiquitous or the mundane somehow spellbinding, those of us who are used to the spicier diet of genre fiction tend to need some additional seasoning. For example: take the embarrassing family dinner where the nest-flown kids return to introduce their significant others to the generation-gapped parents. Many or most of us have lived through that experience, but if you try to put it in a work of SF and run it for a chapter or two you will lose most of your readers—starting with your editor—unless you reach for the hot sauce. (For example: throw in all four of the youngsters having separate coming-out experiences over the dinner table, with a parental meltdown for light relief. Been there, did that in "The Nightmare Stacks").
Generally genre readers prefer, if not two-fisted action heroes, then at least people whose lives are less uninteresting than their (our) own. So we try to invent interesting protagonists, people thrust outside our own comfort zone who nevertheless are equipped to deal with the slings and arrows and ancient curses of a different reality.
But reality is always going to one-up you because it's under no requirement to make sense.
Let us take, for example, a fellow called Ignaz Trebitsch-Lincoln (Wikipedia biography here), who was (variously) a Jew, a Presbyterian, a Buddhist, a spy, a British MP, a Nazi, a propagandist, and a would-be Balkan oil cartel mogul. Oh, I forgot to mention: claimed reincarnation of the Dalai Lama and Japanese-backed candidate for the Emperor of China. (Not bad for a poor shtetl boy who started out as a Hungarian orthodox Jewish yeshiva student.) Nothing about this man makes any sense whatsoever unless he's a character from a movie script written by Thomas Pynchon for Woody Allen.
Ignatius was born to an Orthodox Jewish family in Paks in Hungary in 1879, but after a brief student career as an actor (with a side-line in petty theft) he fled Hungary for London, fell in with Lutheran missionaries, and converted to Christianity. He joined a seminary, got in trouble, and was sent to Canada to evangelize the Jews of Montreal. Whereupon he decided Anglicanism was more to his taste, had a falling out with the mission, and decamped for Britain. Talking himself into a position as a curate in the Church of England he contrived to get himself elected to Parliament briefly in 1910. (He was unseated at a second general election later the same year.) He was less lucky in business but somehow managed to combine being a British MP with attempting to establish a monopoly on the Balkan oil fields. The outbreak of war saw him back in London and, when the British rejected his services as a spy, he promptly made contact with the Germans, who had no problem employing him as a double agent ... somewhat unwisely, as this all became material for his kiss-and-tell book Revelations of an International Spy, published in New York in 1916.
No. Just no. Not making this up.
High points of what happened after he was released from his prison sentence for fraud in Parkhurst Prison after the war? Well, his supernatural charisma failed him at one point: Adolf Hitler was not terribly impressed when they met in 1920, even though Trebitsch-Lincoln reputedly saved Hitler's life in the wake of the failed Kapp Putsch (whose Minister for Information Trebitsch-Lincoln briefly was, making him the only former British MP to serve in a German government). Drifting from one right-wing rabidly anti-semitic group to another (and serially betraying them to the highest bidder) he finally ran out of friends in Europe and fled east. In China he initially worked as an arms smuggler for various warlords before converting to Buddhism, rising to the rank of abbot, and establishing his own monastery, where initiates were required to hand over all their possessions to the abbot (who spent his spare time seducing nuns). He seems to have contracted a strong hatred for the British government along the way, which possibly motivated his transfer of allegiance to the Japanese empire in China ... or perhaps this was simply a diplomatic move intended to secure Japanese and Nazi backing for his bid to take over Tibet by proclaiming himself Dalai Lama. (Himmler was apparently an enthusiast.)
Circumstances surrounding his death in Shanghai in 1943 are unclear, but he is known to have written a letter to Hitler protesting the holocaust; the response—a Nazi diplomatic request that the Japanese authorities poison him—probably led to his death due to "stomach trouble".
We can't know for sure at this remove, but Trebitsch-Lincoln certainly displayed all three of the dark triad personality traits. The records don't suggest that he was physically violent (although his relationships with women were exploitative at best and almost certainly psychologically abusive), but he had an alarming ability to talk himself into anyone's good books. If the Kapp putsch had been successful he might well have gone on to be a sort of proto-Goebbels for an early Fascist post-war regime. If he'd been slightly more successful in obtaining backing from the Gestapo in the far east he might have had the necessary backing to proclaim himself Emperor of China.
And if he'd survived past 1945 I am absolutely certain that Ian Fleming would have drafted him in as the role model for a Bond villain.
This was going to be a bumper-pack of implausible larger-than-life characters from history, but I sort of overran my target. If you want some homework, though, you could do a lot worse than read up on Julie d'Aubigny, Mademoiselle La Maupin (1673-1707), cross-dressing swordswoman, opera diva, lethal duelist and seducer of nuns (and briefly mistress of Maximillian II Emanuel, Elector of Bavaria).
As wikipedia notes, dead-pan, "due to Mademoiselle de Maupin's beautiful voice, her acting skill, and her androgynous appearance, she became quite popular with the audience, although her relationship with her fellow actors and actresses was sometimes tempestuous ... Her Paris career was interrupted around 1695, when she kissed a young woman at a society ball and was challenged to duels by three different noblemen. She beat them all, but fell afoul of the king's law that forbade duels in Paris" (so she fled to Brussels and waited for the fuss to die down while having an affair with a foreign head of state).
Or, as Badass of the Week puts it, "Julie D'Aubigny was a 17th-century bisexual French opera singer and fencing master who killed or wounded at least ten men in life-or-death duels, performed nightly shows on the biggest and most highly-respected opera stage in the world, and once took the Holy Orders just so that she could sneak into a convent and bang a nun. If nothing in that sentence at least marginally interests you, I have no idea why you're visiting this website." Nothing particularly unusual here: just another 17th century bisexual Annie Lennox clone and opera star with a side-line in sword-fighting.
Two serious points for any fiction writer emerge from this meditation on eccentricity.
Firstly, any accurate depiction of mundane real-world life has to take into account the fact that reality contains multitudes, including outrageously and larger-than-life figures like La Maupin and Trebitsch-Lincoln. You can write hyperrealistic literary character studies of protagonists who are utterly barkingly implausible except insofar as they are based on real people; or you can write escapist genre fantasies about utterly plausible normal people thrust outside their comfort zone (a vampire! Except he just happens to be a low-level banking IT dogsbody turned civil servant). What you can't do is one-up reality, because reality has a bottomless magic wallet full of colourful surreal excess.
Secondly, if we wish to add spice to a work of escapist SF or fantasy, we can sometimes do better by looting the historical archives than by trying to roll our own characters. La Maupin would work perfectly as a foil for the protagonist of a secondary world fantasy yarn (set in I-can't-believe-it's-not 17th century France, with added magic), or perhaps even as the protagonist herself. Trebitsch-Lincoln is of course the Bond Villain Who Got Away (because Ian Fleming forgot to write about him), a Bizarro-world hybrid of Doctor No and Ernst Stavro Blofeld (and, on reflection, it's possible that Fleming did know of him; it has been several decades since I read the original novel of "You Only Live Twice", but Trebitsch-Lincoln's eastern self-reinvention may well have informed Fleming's depiction of Blofeld in Japan). But if we employ characters like this, we have to dial back on the weirdness of the setting, lest the dish come out excessively spiced to the point of implausibility. Better, I think, to dump the protagonists of a literary novel out of their comfort zone in the deep end of a space opera, than to try to write La Maupin in orbit.
So: who are your favourite barkingly implausible historic characters—not currently alive, please, that would be tasteless—and how would you deploy them in fiction? (Be sure to leave not only a name but a link to some biographical colour, and to explain your fictional reasoning.)
Amidst the fun (for certain values of the word ‘fun’, anyway) of this summer’s Labour leadership contest, there’s a regularly repeated assumption that the result of it will lead to the party splitting. As the re-election of Jeremy Corbyn appears likelier and likelier, so grows the number of people anticipating the bulk of the Parliamentary Labour Party splitting off to form their own parliamentary grouping and/or party.
Party splits in British politics are predicted far more often than they actually occur. Sure, there’s the odd defection between parties (though even those are rare at a Parliamentary level), but there have been many more instances of people being absolutely certain that a party is going to split than actual instances of parties splitting.
There are two main reasons for this. First is the fact that even when people within a party believe it should split, the tendency is to project that desire onto the people you disagree with. No one wants to give up the power of the party’s existing name, assets and structures, so we get the situation we have now with supporters of Labour’s leaders demanding that the Blairites form their own party or go and join the Tories, while their opponents tell the bloody Trots to sod off back to the SWP. Both sides see themselves as the defenders of the tradition of the Labour Party and the others as betraying it, and both believe the others should leave so they can have their party back.
This brings us to the second reason, and the question of why these two strands of the left coexist in a single party in the first place. Most European countries have two separate parties on the left – a social democratic party, and a further left socialist/communist party. There are some elements of this in British politics, with various parties vying to fill the gap to the left of Labour, but the British left parties are much smaller than their European counterparts. Most of those European parties have had continuous (and often sizeable) parliamentary representation, which hasn’t been the case in the UK since the Communists lost their last MPs after WW2.
The left that exists as separate parties in other countries has been subsumed within the wider Labour Party because the British electoral system rewards larger ‘catch-all’ parties and punishes smaller parties. Separate left parties can thrive in systems based on proportional representation, and even the French two-round system allows for the Communists and diverse left to exist separately from the Socialists, but in Britain they are forced to stay together for fear of the electoral consequences (perhaps best demonstrated in the SDP-Labour split of the 1980s).
Unless you’re entirely confident you can reduce the other side to an insignificant rump, a party split is close to mutually assured destruction, scuppering the electoral chances of both sides. There have been plenty of times over the years when the left could have split off to form their own party, but none of those chances were taken, because the political system would have made a split even more disastrous for those involved than the SDP. If the Labour Party could exist as two (or more) separate parties, then they would have formed naturally by now rather than trying to cohabit in the same organisation. If we had a different electoral system, things might be different, as splitting wouldn’t be such a destructive move. Maybe this is something both sides can now blame Tony Blair for, because if he’d delivered on his promise of PR after 1997, the party might not be in the mess it’s now in.