<b>Principia Concordia</b><br />
<i>Ratio est concordiam</i><br />
By Anonymous<br />
<hr />
<b>Windows 10's "Notify to schedule restart" option is evil</b><br />
<i>2015-11-03</i><br />
Happy Patch Tuesday, everyone. This is actually my first Patch Tuesday on Windows 10, and I'm learning some interesting things about how it works in 10.<br />
<br />
By default, Windows Update will reboot the computer automatically when it's not in use. That's fine for most people, but since my computer is essentially a gaming rig, I don't want to trust Microsoft on the whole "when it's not in use" thing. I have occasionally seen other software vendors screw this up, and I'd rather choose reboot times myself. I rarely leave the computer on for more than 12 hours on end, so there is usually no need to reboot the computer anyway. I opted for "notify me to schedule a restart," which seems sensible enough, right?<br />
<br />
Wrong. You see, when Windows Update "notifies" you, it does this by minimizing whatever you have open and popping up a system-wide modal dialog.<br />
<br />
Yes, that's right. A <b>system-wide modal dialog</b> (like a UAC prompt, so you can't click away from it). Which steals focus and wasn't initiated by user action.<br />
<br />
Seriously, guys? That was the best UX design you could possibly come up with?<br />
<br />
OK, let's review some basic UX rules that I <i>thought</i> Microsoft had down cold, but apparently they don't. First, you <b>do not steal focus</b>. Ever. Whatever the user is doing is always more important than whatever you want to show them. Microsoft <a href="http://blogs.msdn.com/b/oldnewthing/archive/2009/02/20/9435239.aspx" target="_blank">applies this rule to other apps</a>, but apparently can't be bothered to follow it themselves. Second, <b>dialogs are not modal</b>. To be more specific, dialogs should rarely be app-modal and (almost) never be system-modal. UAC breaks this rule, but there are legitimate security reasons for it (if it were not system-modal, another app could steal focus or interact inappropriately with the dialog). For Windows Update, there are no such reasons, other than the perennial "Windows Update must be as annoying as humanly possible" design aesthetic that Microsoft seems to go for with each new version of Windows. Third, <b>dialogs are initiated by user actions</b>. Dialogs do not randomly appear when the user is in the middle of something, if at all avoidable. For that use case, we have toasts, balloon tips, etc., which are all far less annoying.<br />
<br />
It is 2015. We should not be having this conversation, Microsoft.<br />
<hr />
<b>Mercurial koans</b><br />
<i>2015-09-18</i><br />
A young acolyte approached Master Hg. "Master Hg, what is the nature of Mercurial Branches?"<br />
<br />
Master Hg replied, "Branches are markers attached to every commit, indelible and eternal."<br />
<br />
The acolyte nodded. "Master, if we use branches for tracking new features in our product, how are we to avoid namespace pollution?"<br />
<br />
Master Hg regarded the acolyte coolly. "Mu. Branches are lines of development, forgettable and ephemeral."<br />
<br />
At once, the acolyte was enlightened.<br />
<hr />
A student was working under Master Hg. "Master, yesterday I discovered a seven-headed hydra in our history."<br />
<br />
Master Hg nodded, saying nothing.<br />
<br />
"I did not wish to fight the beast, but I noticed it resided entirely on a separate branch. So I closed the branch, and thought it dealt with. But today I looked again and saw the hydra still lived. Why was the branch not closed?"<br />
<br />
"The branch was indeed closed," replied Master Hg, "leaving six branches open."<br />
<br />
<hr />
<i>Ed. note: This koan is obsolescent and has no successor.</i> <br />
<br />
One day, a traveler from a faraway land sought Master Hg's guidance. "Master Hg, I wish to alter history."<br />
<br />
Master Hg nodded, smiling warmly. "What you seek is easily attainable. History is supple and easily rewritten."<br />
<br />
Excitedly, the traveler began researching in Master Hg's venerable library and shared his work with others. After many days, he returned. "Master, when I tried to share my changes with my friends, as I have done in my homeland, the DAG became extremely confused and I had to re-clone the server. Why does Mercurial not work correctly?"<br />
<br />
Anger flashed across Master Hg's face. "What you seek is impossible. History is unyielding, and changing it is the domain of the gods."<br />
<br />
"But mere days ago," the traveler protested, "you told me otherwise."<br />
<br />
"I find it curious you remember events which did not occur," replied Master Hg.<br />
<br />
The traveler stormed out angrily. It was many hours before enlightenment struck him.<br />
<hr />
<b>Tim Hunt: Not a martyr</b><br />
<i>2015-06-13</i><br />
As with most incidents relating to the feminism vs. "men's rights" debate, I had planned on quietly ignoring the Tim Hunt issue. Like previous events (donglegate, elevatorgate, etc.), this one is a fairly straightforward issue (Should Tim Hunt have said those things? No, of course not) with a lot of stupid internet drama on the side (though since I left reddit, I have seen a lot less of that), and I prefer not to contribute to the latter.<br />
<br />
But then I read a <a href="https://reason.com/archives/2015/06/13/the-illiberal-persecution-of-tim-hunt" target="_blank">fascinatingly unhinged article</a> in <i>Reason</i>. I got about halfway through before I decided I wanted to blog about it. The basic premise seems to be that Hunt is being oppressed in some fashion. The truly bizarre part, however, is who's (allegedly) doing the oppressing.<br />
<br />
<i>Reason</i>, as you may or may not already know, is a libertarian magazine. They favor maximal individual liberty. Yet the people they are denouncing are private individuals exercising their rights to freedom of speech and freedom of association. It's simply baffling. It's as if the author wants Mr. Hunt to enjoy full civil liberties, but not his audience.<br />
<br />
Let's go over freedom of speech. Here's our cherished First Amendment (which, incidentally, does not apply to the UK, but American freedom of speech is quite strong compared to the rest of the world, and serves as a useful example in this case):<br />
<blockquote class="tr_bq">
<b>Congress</b> shall make no law respecting an establishment of religion, or
prohibiting the free exercise thereof; or abridging the freedom of
speech, or of the press; or the right of the people peaceably to
assemble, and to petition the Government for a redress of grievances.</blockquote>
(bold added, bizarre semicolon usage in original)<br />
<br />
The 14th Amendment later extended this to cover state governments as well. But that's it. If the person acting is not a government entity, freedom of speech doesn't even come into play. If other people mock you because you said something stupid, you have no redress. None whatsoever. This is not a bug, it's a feature. Otherwise, the marketplace of ideas would be stifled by protectionist lawsuits and other such barriers.<br />
<br />
So this is standard libertarian freedom-of-speech-is-a-<a href="https://en.wikipedia.org/wiki/Negative_and_positive_rights" target="_blank">positive-right</a> nonsense. May as well analyze it while we're here. We might even learn something.<br />
<br />
In the beginning of the piece, the author is careful to note his disagreement with Hunt's comments:<br />
<blockquote>
Hunt's crime was to make a <b>not-very-funny</b> gag during an after-dinner
speech at a conference on women in science in South Korea earlier this
week.</blockquote>
<blockquote class="tr_bq">
In a normal world, a world which valued the freedom to <b>make a doofus of
oneself</b>, that should have been the end of it. Seventy-two-year-old man
of science makes <b>outdated</b> joke, tumbleweed rolls by, The End.</blockquote>
Yet later, he uses rather different terminology:<br />
<blockquote>
The response to Hunt is way more archaic than what Hunt said. Sure, his
views might be a bit pre-women's lib, pre-1960s. But the tormenting and
sacking of people for what they think and say is <i>pre-modern</i>. It's positively <b>Inquisitorial</b>.</blockquote>
<blockquote class="tr_bq">
The irony is too much to handle: Hunt is railed against for expressing
an old-fashioned view, yet the railers against him do something
infinitely more old-fashioned: they expel from public life someone they
judge to have committed <b>heresy</b>. Kick him out. Strip him of his titles.
Mock his misfortune. "Savour the moment." How awfully ironic that the
Royal Society, which played a key role in <a href="https://royalsociety.org/about-us/history/">propelling Britain from medievalism to modernity</a>, is now being asked to behave in a <b>medieval</b> fashion and send into the academic wilderness a <b>heretic</b> among its number. </blockquote>
<blockquote class="tr_bq">
[snip]<br />
<br />
Too often today we're told that gangs of crazy students or irate
feminists, invading armies of pinkos, are turning otherwise enlightened
universities into hotbeds of PC intolerance. That's way too simple. In
truth, universities themselves, having embraced relativism,
non-judgmentalism, and discomfort with the idea of <b>Truth</b> itself, incite
such behaviour. They green-light it. They facilitate it. The Hunt story
confirms that the academy isn't being destroyed by morally alien beings,
by cushioned, entitled youth—it is destroying itself.</blockquote>
(bold added, capitalization and italics (rendered as <a href="https://en.wikipedia.org/wiki/Roman_type" target="_blank">roman</a>) in original) <br />
<br />
The author's choice to associate feminism with the Inquisition is, I think, unfortunate. We usually think of the Inquisition as scientifically illiterate, suppressing truth in favor of religion. In this analogy, that casts Hunt's comments in the role of truth, reinforced by the use of that word in the final paragraph. Perhaps I'm reading too much into it; the author did say he disagreed with Hunt. But he does clearly say this:<br />
<blockquote>
The Hunt incident is quite terrifying. For what we have here is a
university, under pressure from an intolerant mob, judging a professor's
fitness for office by his personal thoughts, his idea of humour. Profs should be judged by one thing alone: their depth of knowledge. It
shouldn't matter one iota if they are sexist, stupid, unfunny,
religious, uncouth, ugly, or whatever. All that should matter is whether
they have the brainpower to do the job at hand.</blockquote>
I must respectfully disagree. Scientists do not live in little boxes producing research isolated from the rest of the world. Science is a collaborative process. If a professor is sexist, racist, or "whatever," that directly interferes with his or her job. I don't pretend to know whether Hunt's comments rise to the level of a dismissible offense; that's his employer's job. But an offense they were.<br />
<br />
What's truly sad is that the article misses a critical opportunity. Hunt's former employer was UCL, which I believe is government funded. Arguably, then, Hunt's firing is government action in response to speech. I'm not sure I agree with this line of reasoning, but I feel it is on stronger footing than any of the arguments the author raises.<br />
<hr />
<b>Why I don't use hg-flow</b><br />
<i>2015-04-03</i><br />
I recently had a chance to read <a href="http://nvie.com/posts/a-successful-git-branching-model/">A successful Git branching model</a> (the branching model underlying the popular git-flow and hg-flow extensions), and I found it rather interesting. For NBTParse, I've been following <a href="https://bitbucket.org/NYKevin/nbtparse/wiki/Branching%20strategy">a modified form</a> of Mercurial's <a href="http://mercurial.selenic.com/wiki/StandardBranching">standard branching</a>. I thought about trying to adapt my work to use hg-flow, but I realized the differences are largely cosmetic:<br />
<ul>
<li>My main development descends from the @ <a href="http://stevelosh.com/blog/2009/08/a-guide-to-branching-in-mercurial/">bookmark</a>, just like the "develop" branch of Driessen's model. Conveniently, Mercurial automatically updates to this bookmark when cloning, if there's no obvious target revision.</li>
<li>Although I rarely bother with them, feature branches are easily supported as bookmarked alternate heads of the default named branch. I may use them more often once NBTParse approaches stability and it becomes necessary to keep the trunk stable(ish) leading up to a beta release.</li>
<li>I use release branches, much like Driessen. Mine are named branches instead of bookmarks, but this is mostly a matter of the former not existing under Git. The release branches also have bookmarks, which are reused from one branch to the next; this makes it easy to (automatically) find the current unstable release branch (it's just <tt>release-unstable</tt>), for example.</li>
<li>Much like feature branches, hotfix branches are just bookmarked alternate heads of release branches. Again, I rarely bother with them, since my release branches remain open for as long as the released product is supported. However, they can be useful if a fix is likely to require multiple commits or the attention of multiple developers.</li>
<li>Now we come to the "master" branch. I must admit, I don't quite have a master branch, but I have the next best thing. All my releases are tagged, and the latest unstable (and stable, once we hit stable) is bookmarked. I can just do <tt>hg log -r 'tag("re:version-.*")'</tt> to find everything that would have been on the master branch if I had one. If I only want stable releases, I can use a more precise regex (e.g. <tt>^version-\d+\.\d+\.\d+$</tt>). Oh, and those <a href="http://www.selenic.com/hg/help/revsets">revsets</a> work in Bitbucket's search interface, too.</li>
</ul>
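The stable-release regex in that last bullet is easy to sanity-check outside of Mercurial; here is a quick Python sketch (the tag names are made-up examples following the scheme above):

```python
import re

# Stable releases are tagged version-X.Y.Z; anything with a suffix
# (betas, release candidates) or a different prefix should not match.
STABLE = re.compile(r'^version-\d+\.\d+\.\d+$')

tags = ['version-0.2.0', 'version-1.0.0', 'version-1.0.0b1', 'release-unstable']
stable = [t for t in tags if STABLE.match(t)]
print(stable)  # ['version-0.2.0', 'version-1.0.0']
```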
I'm sure Driessen's model is extremely helpful for him and lots of other developers, and it's nice to see people looking at nontraditional branching strategies. But I guess I just don't see much value in rearranging my commits like this; my revision DAG is complicated enough already. <br />
<hr />
<b>NBTParse</b><br />
<i>2015-02-20</i><br />
For quite some time now, I've been working on a Python library called <a href="https://bitbucket.org/NYKevin/nbtparse">NBTParse</a>. It's designed to manipulate Minecraft worlds. It's been languishing in a not-ready-for-prime-time state for the longest time, but I've finally decided to start working towards a 1.0 release. As a motivator for that, I've begun rapid releases of 0.x.0 versions.<br />
<br />
Each new 0.x.0 release will happen on the first Thursday of the month. 0.2.0 dropped on the 5th, but I didn't think to blog about it until now. It can be had from PyPI under the name <code>nbtparse</code> (that is, you can run <code>pip install nbtparse</code> as usual). If you don't have a C compiler, you may have difficulty with this.<br />
<br />
Let's talk about the current status of the library. It's currently very rough, with a number of serious shortcomings. The biggest is the lack of a lighting engine. In short, it doesn't adjust lighting information after altering the terrain. Because of this, terrain manipulation is highly risky at the moment, with great potential to corrupt saves. In other words, <i>back up your saves</i> before playing with this. But you knew that already; it's listed as pre-alpha in PyPI, after all. Work on the lighting engine has not seriously begun yet, so this will likely remain the case for a long time. On the other hand, I hope to at least have heightmap updating in time for 0.3.0 (this will hopefully prevent any corruption from actually crashing Minecraft, though you'll still experience lighting errors).<br />
<br />
Another issue is Minecraft 1.8 compatibility. In short, there is none. While 1.8-isms are nominally accessible, they're not particularly fun to work with right now. Fortunately, this is mostly a matter of sitting down and writing declarative code to describe the new features. It should be fairly straightforward once I get around to actually doing it. I hope to have this in 0.3.0.<br />
<br />
Finally, the API is not yet stable. Some aspects of the library may change as we move closer to 1.0, without backwards compatibility.<br />
<hr />
<b>Meta-Compatibilism</b><br />
<i>2014-10-19</i><br />
<dl class="dialogue">
<dt>Salviati</dt>
<dd>A few days ago, I bought a $5 desk toy on Amazon. Now that it's here, I'm regretting the purchase. It's not nearly as good as I expected. Oh well, it's not that much money.</dd>
<dt>Simplicio</dt>
<dd>Do you ever wonder if you could have done things differently?</dd>
<dt>Salviati</dt>
<dd>What do you mean?</dd>
<dt>Simplicio</dt>
<dd>Well, take that toy, for example. Could you have decided not to buy it?</dd>
<dt>Salviati</dt>
<dd>Well, if I had known I wouldn't like it, of course I wouldn't have bought it.</dd>
<dt>Simplicio</dt>
<dd>That's not what I meant. Could you have chosen differently if you hadn't known?</dd>
<dt>Salviati</dt>
<dd>What are you talking about? I <i>didn't</i> know and I did buy it. What more is there to discuss?</dd>
<dt>Simplicio</dt>
<dd>Were you making a real choice, or just following the laws of physics?</dd>
<dt>Salviati</dt>
<dd>What is a "real" choice?</dd>
<dt>Simplicio</dt>
<dd>A choice which could have come out differently.</dd>
<dt>Salviati</dt>
<dd>If I had decided by flipping a coin, then it could have come out differently. But that doesn't really strike me as much of a choice.</dd>
<dt>Simplicio</dt>
<dd>No, the coin's motion is determined by chaotic air currents and subtle physical factors. There isn't any real randomness there.</dd>
<dt>Salviati</dt>
<dd>OK, what if I use a Geiger counter hooked up to a radioactive mineral? Unless you subscribe to a hidden variable theory (which I do not), the counter's clicking is truly random. No matter how much you know about the situation, you can never predict exactly when it will go off.</dd>
<dt>Simplicio</dt>
<dd>We're missing the point. I don't want to know whether a Geiger counter could have made a different choice. I want to know whether <em>you</em> could have made a different choice.</dd>
<dt>Salviati</dt>
<dd>Either some part of my brain acts like a miniature Geiger counter, or no such part exists. If it does exist, then yes, I could have "chosen" differently, but I don't think you'll want to count that either. If it doesn't, then obviously not.</dd>
<dt>Simplicio</dt>
<dd>So you admit it! You did not make a real choice, because real choices don't exist. We're all slaves to physics.</dd>
<dt>Salviati</dt>
<dd>Nonsense. I wanted it, and purchased it because I felt like it. My action directly resulted from my desire. No one forced me to buy it. Physics isn't a person, actively controlling my life. You still haven't given me a definition of "real choice."</dd>
<dt>Simplicio</dt>
<dd>A real choice is a choice you could have made differently, under the same circumstances, under your own conscious direction.</dd>
<dt>Salviati</dt>
<dd>But why would I want to? At the time, I thought it was a grand idea. In that state of mind, why wouldn't I buy it?</dd>
<dt>Simplicio</dt>
<dd>That's irrelevant. The question is whether you could have chosen differently, not whether you actually would have.</dd>
<dt>Salviati</dt>
<dd>What's the difference between "I could have chosen differently, but never would have," and "I could not have chosen differently?" Those sound like the same thing to me.</dd>
<dt>Simplicio</dt>
<dd>They're not. "Could" is physics. "Would" is choice.</dd>
<dt>Salviati</dt>
<dd>But you just said choice doesn't exist. Besides, how can choice be independent of physics? Unless you want to start talking about an immortal soul or something...</dd>
<dt>Simplicio</dt>
<dd>It doesn't matter. A choice-making soul would have the same logical problems as a choice-making brain. And this quibbling is pointless. Choice, as I've defined it, doesn't exist. We both agree on that much, I think.</dd>
<dt>Salviati</dt>
<dd>It doesn't exist because it is ill-defined.</dd>
<dt>Simplicio</dt>
<dd>No, it doesn't exist because it fails to refer.</dd>
<dt>Salviati</dt>
<dd>The desk toy was just an example. Setting it aside for now, all of your arguments have been <i>a priori</i>, that is, logical reasoning divorced from empirical evidence. If you demonstrate by logic that something can't exist, it must be logically inconsistent. You've argued that your own definition of choice is logically inconsistent. Your position is the same as mine.</dd></dl>
<br/>
The above writing format is a blatant ripoff of Galileo. I find it rather convenient, but it is admittedly unoriginal.
<hr />
<b>Watch Your Back, Git</b><br />
<i>2014-09-23</i><br />
Changeset evolution is a big deal. But nobody seems to be talking about it. Well, except for this guy:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/4OlDm3akbqg?feature=player_embedded' frameborder='0'></iframe></div>
<br />
But even he says it's a small set of incremental improvements. This is not small. But it is all a little abstract right now. Let's write a use case.<br />
<br />
Suppose we have a small group of developers (say, <a href="https://en.wikipedia.org/wiki/Alice_and_Bob">Alice, Bob</a>, Carol, and David). They want to do some work on a repository, but don't want to share that work with the public (if you're a closed-source shop, substitute "rest of the team" for "public") until it's finished. If it helps your imagination, you may assume the work is a fix for a security vulnerability that hasn't gone public yet.<br />
<br />
The obvious first step is to clone the public ("company-wide") repo and set up a private repo. Then they can all push and pull to the private repo. Individual team members may also want to set up their own clones to avoid accidentally pushing to the wrong repo, but that's simple enough.<br />
<br />
Because this is a fairly serious security issue, our developers are working fast and pushing often. The resulting history is a bit of a mess. Alice wants to do an interactive rebase to tidy things up. But it turns out Git actually makes this pretty difficult. To be fair, so does Mercurial, in its stable incarnation.<br />
<br />
She can do the rebase locally, but then the only way to get it onto the server is <tt>git push --force</tt>. And that's rarely a good idea. If she does so, Bob, Carol, and David will need to do their own history fixups. That process can easily eat up a whole afternoon if a developer hasn't pushed in a while, the rebase is extensive, or both. We did say they were pushing often, but maybe they don't all follow the same workflow.<br />
<br />
More importantly, Bob, Carol, and David will need to <i>know</i> they need to do history fixups, or they'll just merge the old history back in and ruin all of Alice's careful work. Of course, developers should be communicating with each other regularly. Developers should also be writing unit tests and documentation. Nothing is ever perfect.<br />
<br />
Enter changeset evolution. Under Mercurial, Alice could perform the equivalent of her rebase with a series of simple commands (like <tt>prune</tt>, <tt>fold</tt>, and <tt>reorder</tt>). She can then push it normally. The other three developers will get a somewhat messy history out of this, but they can just do <tt>hg evolve</tt> to clean it up (semi)automatically (they can still fix it manually if they really want to). In particular, Mercurial will prevent pushing from a messed up history.<br />
<br />
Historically, Mercurial has resisted history rewriting, preferring to mark things as unwanted and quietly forget about them (cf. <tt>hg ci --close-branch</tt>, which just marks a head as closed). Then MQ became popular. MQ is a built-in extension for managing a stack (or "queue," in the LIFO sense) of patches. It can commit and uncommit them to local history, and thereby provides basic history editing. Lately, however, the Mercurial developers have <a href="http://gregoryszorc.com/blog/2014/06/23/please-stop-using-mq/">expressed dissatisfaction with MQ</a>, in favor of several newer tools. In particular, we have <tt>hg histedit</tt> as a built-in extension. That's roughly the equivalent of an interactive rebase. We also have <tt>hg rebase</tt> for non-interactive rebase. <tt>hg strip</tt> can be used to drop a changeset and its descendants with no further ado. And of course, you can always do <tt>hg ci --amend</tt> to quickly fix the parent of the working directory (i.e. <tt>HEAD</tt>, in Git parlance, or <tt>.</tt> in hg revsets).<br />
<br />
All this history modification is nice, but it's somewhat risky. If you modify public history, it's easy to make a fine mess. So Mercurial tracks whether a revision has been pushed to a public server yet with so-called "phases." Changesets in the "public" phase are immutable, though you can manually force them back into the "draft" phase if necessary. But Mercurial also allows some servers to be flagged as non-publishing, which means pushing to them doesn't count.<br />
<br />
Right now, working with non-publishing servers is unnecessarily cumbersome. You see, Mercurial does not discard "unreachable" changesets the way Git does. So an <tt>hg push --force</tt> just creates a new remote head. It's basically a detached <tt>HEAD</tt> except that it isn't eligible for garbage collection (which I believe doesn't even exist under Mercurial). On the other hand, Mercurial makes it easy to find these heads with the <tt>hg heads</tt> command. To fix this issue, you need to manually strip the old head server-side, and then do so again on everyone else's local copy, possibly with additional history fixups. Alternatively, you can manually close the head with <tt>hg ci --close-branch</tt>, but all that really does is hide the head from <tt>hg heads</tt>; the history still appears in <tt>hg log</tt> and the revision DAG.<br />
<br />
Changeset evolution resolves this issue by returning to the append-only history model. Pruning a changeset does not delete it; it simply flags it obsolete. If it lacks non-obsolete descendants, it is hidden from the repository history. The same obsolescence markers are used for all of the other new history rewrites. These markers are pushable and pullable, though Mercurial will try to avoid pulling or pushing obsolete changesets unless absolutely necessary or the user manually requests it. These markers are how <tt>hg evolve</tt> knows what to do.<br />
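The hiding rule can be sketched as a toy model in Python (my own illustration of the rule as described, not Mercurial's actual data structures): a pruned changeset stays in history, and is hidden only while it has no non-obsolete descendants.

```python
# Toy model of obsolescence-based hiding (illustration only, not
# Mercurial's real implementation). `children` maps each changeset to
# its child changesets; `obsolete` is the set of pruned changesets.

def descendants(children, node):
    """Every changeset reachable below `node`."""
    found, stack = [], list(children.get(node, []))
    while stack:
        n = stack.pop()
        found.append(n)
        stack.extend(children.get(n, []))
    return found

def hidden(children, obsolete, node):
    """A changeset is hidden iff it is obsolete and has no
    non-obsolete descendants to keep it visible."""
    if node not in obsolete:
        return False
    return all(d in obsolete for d in descendants(children, node))

# Linear history A -> B -> C, where B was pruned but its child C was not:
children = {'A': ['B'], 'B': ['C'], 'C': []}
print(hidden(children, {'B'}, 'B'))        # False: C keeps B visible
print(hidden(children, {'B', 'C'}, 'B'))   # True: the whole line is obsolete
```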
<br />
When changeset evolution hits stable, Mercurial will have a significant advantage over Git in terms of history rewriting.<br />
<hr />
<b>The Pendulum and the Winch</b><br />
<i>2014-09-19</i><br />
I often find that explaining computer science to non-computer-scientists is difficult. It's been said that computer science is like no other field of study. Well, I think that's a rather strong claim to make. What follows is a translation of a standard problem in computer science into physics. It is an analogy, unrealistic but nevertheless interesting.<br />
<br />
I have a pendulum, supported by some apparatus ultimately connected to a pillar or pole. It is possible to move the apparatus up or down, but only by manually detaching and reattaching it by hand. I have a winch affixed to this apparatus which may raise or lower the pendulum. It is connected to a coil of rope or string (which, <a href="https://en.wikipedia.org/wiki/Spherical_cow">for the purposes of this problem</a>, is infinitely long yet magically takes up a finite volume), and can be remotely controlled at the press of a button. The winch is also geared discretely; it only turns in units, and then only one at a time.<br />
<br />
I wish to carry out a series of experiments involving varying the length of my pendulum. In particular, I often want to lengthen the pendulum. Most of the time, this setup suits me quite well. But sometimes, I find I need a pendulum longer than the apparatus is high off the ground. In these situations, I need to climb the pillar and move the winch up. In so doing, I may need to start an entire experiment over again because the pendulum lost energy while I was climbing. How can I avoid or minimize those climbs in proportion to the maximum length of the pendulum? We must assume I do not know the maximum length in advance, perhaps because my experiments are highly complex and difficult to predict, or perhaps because they are directed by someone else's instructions, and they did not think to tell me in advance how long a pendulum I would need.<br />
<br />
The obvious answer is to fix the winch at the top of the pillar. But we may suppose the pillar is quite tall. In fact, it is extraordinarily tall, to the point that "pillar" is no longer an adequate term. Instead, it is essentially a <a href="https://en.wikipedia.org/wiki/Space_elevator">space elevator</a>. The logistical problems with this should be apparent, even if we suppose I also have a crane or cherry picker of equivalent (and grossly unrealistic) height.<br />
<br />
We have encountered another constraint: we must minimize the height of the pendulum above the ground. If the pendulum is too high, it will be more difficult to work with. This is not a hard-and-fast requirement; we may say that it is acceptable for the pendulum to get quite far off the ground, so long as its height remains reasonable in proportion to the maximum length of the pendulum.<br />
<br />
The commonly accepted solution (on the other side of the analogy) is to double the height of the winch every time the pendulum touches the ground. If we only lengthen, and never shorten, the pendulum will never be farther off the ground than it is long. As we lengthen, the number of times we raise the winch is proportional to the binary logarithm of the pendulum's length, a nice and small number. Of course, if we shorten the pendulum again, these metrics look worse, but we specified we only cared about them in proportion to the <i>maximum</i> length of the pendulum, not its current length. <br />
<br />
Actually, the preferred solution multiplies the height of the winch by some smaller factor, such as 1.5, instead of doubling, because this works better in practice. Different solutions use different factors, and there is some disagreement over the ideal value, but a factor below the golden ratio has a practical advantage: the space vacated by earlier moves can eventually be reused for a later one, which a factor of 2 can never manage.<br />
<br />
If we say the <i>cost</i> of moving the winch is proportional to the current height of the winch, we may say that on average, lengthening the pendulum has a constant cost. This means that on average, we expend the same amount of effort every time we lengthen the pendulum by a unit. We can see this as follows: Every time we move the winch, we incur a cost proportional to the current length of the pendulum. The previous winch-moving cost half as much as the current one, and so on. By summing all of the preceding costs, we will arrive at a figure proportional to at most <a href="https://en.wikipedia.org/wiki/1/2_%2B_1/4_%2B_1/8_%2B_1/16_%2B_%E2%8B%AF">the height of the winch</a>, which is the same as the cost we're about to pay. That means the total paid is also proportional to the height of the winch, which is equal to the length of the pendulum (the pendulum is on the ground right now, since we're moving the winch), and so we see the total paid is proportional to the total lengthened.<br />
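The geometric-series argument above can also be checked by brute force. Here is a small simulation (Python, purely for illustration) using the cost model just described: every time the winch moves, we pay the current length of the pendulum.

```python
# Sketch of the cost model from the text: pay the current length every
# time the winch must move (i.e., every time capacity runs out).
def total_move_cost(n, factor=2):
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:          # pendulum has reached the ground
            cost += size              # moving the winch costs the current length
            capacity *= factor
        size += 1
    return cost

# The total cost stays below 2n, so the average cost per unit of
# lengthening is bounded by a constant:
for n in (10, 1000, 10**6):
    print(n, total_move_cost(n))
```

For every n the total comes out below 2n, so each unit of lengthening costs a constant amount on average, exactly as the summation argument predicts.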
<br />
As some of you may have guessed, the analogy is to a <a href="https://en.wikipedia.org/wiki/Dynamic_array">dynamic array</a>, known to C++ developers as a vector. The length of the pendulum is the size of the array, while the height of the winch is its capacity. The height of the pendulum above the ground is the wasted memory which has been allocated but not consumed. Lengthening is appending and shortening is popping. Moving the winch is a reallocation. The preceding paragraph is <a href="https://en.wikipedia.org/wiki/Amortized_analysis">amortized analysis</a>. Under that analysis, we may also think of the height of the pendulum as the <a href="https://en.wikipedia.org/wiki/Potential_method">potential function</a>, which meshes quite nicely with our analogy.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-44019450687939610282014-08-27T20:34:00.000-04:002014-08-27T20:56:01.865-04:00The Great Picard TheoremThe <a href="http://en.wikipedia.org/wiki/Picard_theorem">great Picard theorem</a> states that, if an analytic function has an essential singularity, then within any punctured neighborhood of the singularity, the function takes on every complex value, with at most a single exception, infinitely often.<br />
<br />
<a name='more'></a>OK, that probably looks like complete gibberish. Let's break it down.<br />
<br />
First of all, what are we even talking about? The phrase "analytic function" is a hint, and "essential singularity" is a dead giveaway. These terms are used when discussing complex functions, that is, functions whose domains are the complex plane (or some subset thereof, but usually not a subset of the reals). Just as we can define functions that operate on real numbers, we may also define functions on complex numbers.<br />
<br />
For example, consider this function:<br />
<br />
<div style="text-align: center;">
<i>f</i>(<i>z</i>) = <i>z</i><sup>2</sup>+1</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
By convention, we use <i>z</i> instead of <i>x</i> when dealing with complex functions. Otherwise, this is a standard polynomial, and we can just substitute complex numbers into it:<br />
<br />
<div style="text-align: center;">
<i>f</i>(<i>i</i>) = <i>i</i><sup>2</sup>+1</div>
<div style="text-align: center;">
<i>f</i>(<i>i</i>) = 0</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
This particular polynomial is apropos because it is central to the definition of complex numbers. <i>i</i> and −<i>i</i> are defined as the roots of this polynomial, and the rest of the complex plane is then derived from that. With a little more work, you can show that <a href="http://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra">every <i>n</i>th degree polynomial has <i>n</i> complex roots</a> (though they may not all be distinct).<br />
<br />
So now that we have a basic grasp of complex functions, what is an "analytic" function? The precise definition of an <a href="http://en.wikipedia.org/wiki/Holomorphic_function">analytic function</a> is a function which, around every point of its domain, is locally equal to its <a href="http://en.wikipedia.org/wiki/Taylor_series">Taylor series</a> there. Remarkably, the analytic functions turn out to be exactly those functions whose complex derivatives exist at every point of their respective domains (more formally, <a href="http://en.wikipedia.org/wiki/Analyticity_of_holomorphic_functions">holomorphic functions are analytic</a>). An "entire" function is an analytic function whose domain is the whole complex plane (as opposed to the complex plane minus some points). The above polynomial is entire because its derivative exists everywhere (equivalently, it is its own Taylor series, which converges everywhere).<br />
<br />
If we have some more complicated function, like the sine function or the exponential function, we can <a href="http://en.wikipedia.org/wiki/Analytic_continuation">analytically continue</a> it by taking its Taylor series. Assuming the original function is sufficiently well-behaved on (some interval of) the real number line, the continuation will be analytic. This is true of the exponential function, sine and cosine, the <a href="http://en.wikipedia.org/wiki/Hyperbolic_function">hyperbolic sine and cosine</a>, and a number of other functions, as well as some variations on them such as the <a href="http://en.wikipedia.org/wiki/Error_function">error function</a>. (The tangent, by contrast, cannot be continued to an entire function; its continuation has singularities wherever the cosine vanishes.) For a more interesting example, the <a href="http://en.wikipedia.org/wiki/Riemann_zeta_function">Riemann zeta function</a>'s analytic continuation is far more interesting and useful than the original infinite sum definition, which diverges whenever the real part of its argument is at most 1.<br />
<br />
Once you know that a complex function is analytic, you can immediately deduce a number of beautiful properties. I'm not going to cover all of them here, but I do recommend researching this for yourself.<br />
<br />
The next stumbling block in the theorem is "essential singularity." A singularity is a point absent from a function's domain. An isolated singularity is a singularity which is not "near" any other singularities. This means there exists a circle with nonzero diameter centered on this singularity which does not contain any other singularities. More simply, the singularity is a point by itself, and not part of an entire singular line or region.<br />
<br />
Isolated singularities can be further broken down into three types:<br />
<ul>
<li>Removable singularities: If the function took on a particular value at this point (instead of failing to exist), it would still be analytic. Intuitively, the function behaves as if its value ought to be a particular value, but for some reason it is instead undefined.</li>
<li>Poles: Loosely, the function approaches <a href="http://en.wikipedia.org/wiki/Point_at_infinity">unsigned infinity</a>, like 1/<i>z</i> does. Intuitively, we would like to say the function actually <i>is</i> infinite at this point. We can use the <a href="http://en.wikipedia.org/wiki/Riemann_sphere">Riemann sphere</a> to formalize this notion, and we end up with a so-called <a href="http://en.wikipedia.org/wiki/Meromorphic_function">meromorphic function</a>, assuming its singularities are all poles.</li>
<li>Essential singularities: All others. Intuitively, there is no value, not even complex infinity, which we may assign without losing analyticity.</li>
</ul>
Essential singularities can be thought of as pathological cases, where a function behaves in a particularly bizarre or irregular fashion. Of course, even the pathological cases must follow some rules, and the great Picard theorem is one of them.<br />
<br />
Now we need to divert into topology to define "punctured neighborhood." Well, actually, we don't need most of the baggage topology gives this term, so we can just define a punctured neighborhood as a disk centered on a given point, but with that point removed. It's usually implied the disk's radius is on the small side, and the theorem we're using specifically applies to <i>any</i> punctured neighborhood, no matter how small.<br />
<br />
We can finally pull it all together. Take a function with an essential singularity, say, this one:<br />
<br />
<div style="text-align: center;">
<i>f</i>(<i>z</i>) = exp(1/<i>z</i>)</div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
The function fails to exist at <i>z</i> = 0. What's more, if you take the limit approaching from the left (more rigorously: approaching <i>along the real number line</i> from the left), you get zero, but if you approach from the right you get infinity. Pick any positive real number <i>ε</i>. Considering only the points within <i>ε</i> of the origin, the function takes on every complex value infinitely many times, except that it is never equal to zero. Here's a picture, courtesy of <a href="http://commons.wikimedia.org/wiki/File:Essential_singularity.png">Functor Salad at Wikimedia Commons</a>:</div>
<div style="text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://upload.wikimedia.org/wikipedia/commons/0/0b/Essential_singularity.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://upload.wikimedia.org/wikipedia/commons/0/0b/Essential_singularity.png" width="320" /></a></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Lighter colors are results farther away from zero, and the hue of the color indicates the direction from zero to the result (positive real results are cyan, and negative real results are red; complex values are other colors). Zero itself is perfectly black. Notice how the colors seem to cycle more rapidly as we approach the center from above or below. You can also clearly see a mixture of light and dark, with the boundary becoming more pronounced towards the center. But since the only singularity is at <i>z</i> = 0, the boundary is never truly sharp (or else the derivative would fail to exist). It may appear to turn into black and white close to the origin, but this is an artifact of the rendering. The image would require an infinite resolution to avoid this kind of bleeding. At no point is the function exactly equal to zero, though it comes very close.</div>
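The "every value, infinitely many times" behavior can also be made concrete without pictures. For any target w ≠ 0 we can solve exp(1/<i>z</i>) = w in closed form: 1/<i>z</i> = log w + 2πik for any integer k, so z<sub>k</sub> = 1/(log w + 2πik), and these solutions march into the origin as k grows. A quick numerical check in Python (the target value 3 − 4i is an arbitrary choice of mine):

```python
import cmath

def preimages(w, ks):
    # Solutions of exp(1/z) = w: take 1/z = log(w) + 2*pi*i*k, one per integer k.
    return [1 / (cmath.log(w) + 2j * cmath.pi * k) for k in ks]

w = 3 - 4j                      # arbitrary nonzero target value
for z in preimages(w, [1, 10, 100, 1000]):
    # |z| shrinks toward 0 while exp(1/z) keeps hitting w (up to rounding)
    print(abs(z), cmath.exp(1 / z))
```

Notice the role of the exception: for w = 0 the formula breaks down because log 0 is undefined, and indeed exp(1/<i>z</i>) is never zero. That is exactly the "at most a single exception" the theorem allows.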
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Unfortunately, some of us are used to looking at simple line charts, and may find the above presentation disorienting. The notion that a function takes on every value infinitely many times just seems profoundly unintuitive. It turns out that some real-valued functions do this too. Consider this one:</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: center;">
<i>f</i>(<i>x</i>) = sin(1/<i>x</i>)/<i>x</i></div>
<div style="text-align: center;">
<br /></div>
<div style="text-align: left;">
<a href="http://www.wolframalpha.com/input/?i=graph+sin%281/x%29/x">Here is the graph</a>. As we get closer to <i>x</i> = 0, the amplitude and frequency of the waveform both increase without bound. It takes on every real value infinitely many times, and continues doing so no matter how close we get. The great Picard theorem just says that in the complex plane, all essential singularities look like that.</div>
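That claim about the real function can be tested numerically, too. Because the envelope 1/<i>x</i> eventually dwarfs any fixed target v, each full oscillation close enough to the origin must cross v, and a sign-change bisection will pin down a crossing. A Python sketch (the bracketing endpoints and the target value 5.0 are my own choices):

```python
import math

def f(x):
    return math.sin(1 / x) / x

def crossing_near_zero(v, k):
    """Find x with f(x) == v inside the k-th oscillation near the origin.

    On [1/(2*pi*k + 3*pi/2), 1/(2*pi*k + pi/2)], sin(1/x) sweeps from -1
    to +1, so f sweeps from below -(2*pi*k) to above +(2*pi*k); once that
    range covers v, the intermediate value theorem guarantees a crossing,
    and bisection on the sign of f - v finds it.
    """
    assert abs(v) < 2 * math.pi * k + math.pi / 2
    lo = 1 / (2 * math.pi * k + 3 * math.pi / 2)   # f(lo) < v here
    hi = 1 / (2 * math.pi * k + math.pi / 2)       # f(hi) > v here
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < v:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Crossings of the same value v exist at arbitrarily small x:
for k in (1, 10, 100):
    x = crossing_near_zero(5.0, k)
    print(x, f(x))
```

Larger k yields a crossing closer to the origin, with no end in sight, which is the one-dimensional shadow of the behavior in the picture above.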
</div>
</div>
Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-45711930242951560372013-08-01T00:13:00.000-04:002013-08-01T00:13:11.382-04:00Singularities happen all the time<blockquote class="tr_bq">
We will soon create intelligences greater than our own. When this
happens, human history will have reached a kind of singularity, an
intellectual transition as impenetrable as the knotted space-time at the
center of a black hole, and the world will pass far beyond our
understanding. -- Vernor Vinge, 1983</blockquote>
The <a href="http://en.wikipedia.org/wiki/Technological_singularity">technological singularity</a> is supposed to occur when we develop "true" or "strong" AI. Beyond that point, we are told, everything will be different, in the most conveniently vague ways. Perhaps society will run on communism, or anarcho-capitalism, or something we don't have a name for (in other words, whatever the author happens to think would be ideal). We are told that the resulting society will be totally incomprehensible to those of us still living in the present day.<br />
<br />
My reaction to all this can be summed up in two words: "So what?"<br />
<a name='more'></a><br />
When Charles Babbage wrote on the analytical engine, could he have foreseen the internet? For that matter, could he have foreseen the processing power of a single modern CPU? Would his contemporaries be able to comprehend the world we live in today?<br />
<br />
"Singularity," so far as I can tell, appears to mean "a point in history beyond which predictions are highly unreliable or entirely inaccurate." It seems to me that almost anything we refer to as a "revolution" should fit this bill. The scientific revolution, the industrial revolution, all the way back to the neolithic revolution. All of these revolutions wrought enormous change and upheaval, and none of them would seem to have admitted many accurate predictions. It is unclear to me how the alleged singularity is supposed to differ from them.<br />
<br />
Yet many believers get extremely worked up about it. Certainly Vinge, above, was rather excited at the prospect. I must admit, the notion of strong AI isn't exactly boring to me, but I fail to understand the broader obsession. Many of the people supposedly championing this idea have proceeded to predict how it will turn out, despite its (alleged) inherent unpredictability.<br />
<br />
The other issue, of course, is that we've known for years now that Moore's law just isn't going to cut it. AI is not a matter of throwing lots of cycles at non-AI-hard problems and the computer somehow "waking up." Intelligence does not need to be designed, as evidenced by human evolution, but nor will it appear <i>ex nihilo</i>. If not explicitly designed, it must be selected for. Humans are not smart because we have a large lump of gray matter in our heads. We are smart because that gray matter is organized in a very particular way. You will not get an AI out of <tt>print "hello world!"</tt>, no matter how many cores you run it on. Machine learning may be helpful here, but <a href="http://xkcd.com/534/">it is not a silver bullet</a>.<br />
<br />
We're nowhere near a functioning strong AI. For that matter, we're nowhere near <i>defining the term</i> "strong AI" in a widely-acceptable way. There are certain setups which would probably qualify under any reasonable definition (such as whole brain emulation), but those setups tend to be the most complex and least feasible designs (no one is going to simulate the physics of individual neurons if they can avoid it, but it's unclear how much "resolution" we can safely give up here), and for that matter, the least interesting setups (yes, a brain in a computer is just as smart as a brain in a human, but who cares?). <br />
<br />
So far as I can tell, almost all features of the AI are up for grabs. Should it pass the Turing test? What about an IQ test? Are any of these conditions <a href="http://en.wikipedia.org/wiki/Sufficient_condition">sufficient</a>, or is it <a href="http://en.wikipedia.org/wiki/Turtles_all_the_way_down">necessity all the way down</a>? What if we're just "teaching to the test?" Maybe we're just making a program good at passing tests and bad at actually thinking for itself (and what does it mean for the program to "think for itself," anyway? I've certainly never had occasion to ask the Python interpreter its opinion on executing a given piece of code). How can tests alone distinguish between "good at tests" and "smart?" And if we're not to use tests, how can we apply machine learning to any of this (or do we propose to hand-code this AI)? I've never seen a proper definition answering these questions.<br />
<br />
I don't doubt that we will eventually develop something which most of us will probably agree qualifies as strong AI. I don't doubt that some rather interesting, even world-shaking, consequences will result. I <i>do</i> doubt that it will happen any time in the immediate future. I do doubt that it will resemble the utopia many authors seem to anticipate. And I most sincerely doubt the value of telling everyone we're on the verge of developing a magical AI that will make everything perfect when we've no idea what such an AI would even look like.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-77192508222074144062013-07-16T20:14:00.000-04:002013-07-16T20:19:36.298-04:00Races and names in Mass EffectI'm a pretty big fan of <i>Mass Effect</i>. One thing I really like about it is the wide variety of races, differentiated in culture as well as appearance. Turians, for instance, rarely commit crimes and will, we're told by the all-knowing codex, readily confess to any crimes which they may have committed. Contrast this with <i>Star Trek</i>, whose Klingons, despite their reputation for "honor" and such, engage in back-room politicking in practically every episode featuring them.<br />
<br />
But this isn't about <i>Mass Effect</i> vs <i>Star Trek</i>. One of the more subtle differences between the <i>Mass Effect</i> races is naming. Asari names are quite different from turian names, reflecting their greatly differing philosophies on life. I thought I'd try to collect some rules for constructing these names.<br />
<br />
I'm going to be using some rather technical terminology here, which I only know because I've spent some time browsing the relevant Wikipedia articles. But, since you probably have better things to do, here's a quick reference:<br />
<dl>
<dt><a href="http://en.wikipedia.org/wiki/Stop_consonant">Stop</a></dt>
<dd>t (but not th), d, k (and "hard" c), "hard" g, b, and p (there's also another stop which we don't have much in English called the <a href="http://en.wikipedia.org/wiki/glottal_stop">glottal stop</a>; it may be represented as an apostrophe or hyphen but there's no hard-and-fast rule).</dd>
<dt><a href="http://en.wikipedia.org/wiki/Fricative_consonant">Fricative</a></dt>
<dd>Lots of things; almost anything that isn't a stop or a sonorant.<br />
<a href="https://en.wikipedia.org/wiki/Affricate_consonant">Some phonemes</a> (specifically ch and j) begin as stops and end as fricatives. </dd>
<dt><a href="http://en.wikipedia.org/wiki/Sonorant">Sonorant</a></dt>
<dd>All vowels including y, m, n, r (not trilled), l, and w. </dd>
</dl>
These are in alphabetical order, with council races first.<br />
<ul>
<li><a href="http://masseffect.wikia.com/wiki/Asari#Notable_Asari">Asari</a>: I see lots of sonorants and some fricatives, but very few stops, and most of those are at the beginning or end of names, while the vowels often form diphthongs; this gives the words an elvish feel, which is apropos since the asari are basically space elves. The names also have a Greek feel; the Asari Republics strongly resemble Ancient Greece, so this is hardly shocking.</li>
<li><a href="http://masseffect.wikia.com/wiki/Drell#Notable_Drell">Drell</a>: We don't meet very many of these, so it's hard to tell. There do appear to be relatively few fricatives, but I can't really be sure.</li>
<li><a href="http://masseffect.wikia.com/wiki/Elcor#Notable_Elcor">Elcor</a>: Again, there really aren't a lot of them, but I will note that every stop is voiceless (t, k, and p) rather than voiced (g, d, and b).</li>
<li><a href="http://masseffect.wikia.com/wiki/Hanar#Notable_Hanar">Hanar</a>: "Blasto" is fictional and looks out of place. The other names seem to have few fricatives and voiceless stops.</li>
<li>Human: Basically modern names, no fancy "smash two names together to make a futuristic-sounding one" shenanigans here.</li>
<li>Keeper: Keepers don't have names.</li>
<li><a href="http://masseffect.wikia.com/wiki/Salarian#Notable_Salarians">Salarian</a>: According to the <a href="http://masseffect.wikia.com/wiki/Salarian#Culture"><i>Mass Effect</i> wiki</a>, salarian names consist of "the name of a salarian's homeworld, nation, city, district, clan name and given name," in that order. They have a lot of fricatives and some stops.</li>
<li><a href="http://masseffect.wikia.com/wiki/Turian#Notable_Turians">Turian</a>: A lot of turian names end in "[i]us." Stops and
fricatives are relatively plentiful, and stops tend to be voiceless rather than voiced, though this is far from universal.
The "[i]us" thing, combined with what I know of turian culture, makes it
apparent that their names are meant to sound Roman.</li>
<li><a href="http://masseffect.wikia.com/wiki/Batarian#Notable_Batarians">Batarian</a>: These resemble the turians, but with a more even balance of voiced and voiceless stops.</li>
<li>Collector: Collectors don't have names.</li>
<li>Geth: Geth usually take designations rather than names as such. The easiest way to do that is something like "<a href="https://twitter.com/ThisUnit1025">Unit 1234</a>." (Side note: 1025's <a href="https://twitter.com/ThisUnit1025/status/308766534342422529">assertion</a> that the number 1025 is meaningful is bullshit; the significant numbers in that neighborhood are 1024 and 102<i>3</i>. The explanation it gives is even more wrong, since 2<sup>10</sup> = 1024 has 11 digits in binary, much like 10<sup>10</sup> has 11 digits in decimal.)</li>
<li><a href="http://masseffect.wikia.com/wiki/Krogan#Notable_Krogan">Krogan</a>: Krogan names are composed of a clan name (such as "Urdnot") and a personal name (such as "Wrex"). <i>Mass Effect 3</i> says that krogan personal names are selected via males having belching contests. "Bakara" doesn't sound like a belch, so I'm guessing this is only for male names. A belch-name should consist of an optional stop followed by a series of sonorants and fricatives, possibly terminated with another stop; moreover, it will probably be monosyllabic or nearly so. This is based on the assumption that a belch consists of a single continuous expulsion of breath; if there are stops in the middle, it isn't continuous (air stopped coming out, hence the name "stop"). "Fortack," "Okeer," and "Skarr" clearly break this rule, but the other names mostly seem consistent with it. The latter two can be explained as someone cheating, starting to make sounds before the real belch began, but "Fortack" just doesn't seem like it could possibly occur as a belch. Maybe someone stuck the t in afterwards.</li>
<li>Leviathan: We don't really have enough information.</li>
<li><a href="http://masseffect.wikia.com/wiki/Quarian#Notable_Quarians">Quarian</a>: FirstName'LastName nar/vas ShipName. The names tend to be monosyllabic, which makes a kind of sense since the population of any given ship is small and the ship's name can be used to disambiguate; there's no need for elaborate names. Diphthongs are rare; the only one I can see is "Rael."</li>
<li>Raoli: We don't see any of them and I only know of them through the wiki.</li>
<li>Reaper: "Nazara" and possibly "Harbinger." That's not enough names to generalize.</li>
<li><a href="http://masseffect.wikia.com/wiki/Virtual_Alien">Virtual Alien</a>: Uh... who are these guys?</li>
<li><a href="http://masseffect.wikia.com/wiki/Vorcha#Known_Vorcha">Vorcha</a>: It's hard to generalize. Most of the galaxy regards vorcha as vermin, and pays relatively little attention to their individual names.</li>
<li>Yahg: We don't know anything about yahg names.</li>
</ul>
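For what it's worth, the digit-count aside in the geth entry is easy to verify (a throwaway Python check):

```python
# 2**10 = 1024 is a round number in binary (a 1 followed by ten 0s,
# 11 digits), just as 10**10 is a round number in decimal; 1025 is neither.
print(bin(2**10), len(bin(2**10)) - 2)   # subtract 2 for the '0b' prefix
print(10**10, len(str(10**10)))
print(bin(1025))                          # 1024 + 1, nothing special
```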
Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-3907479134071709672013-06-26T21:56:00.001-04:002013-08-01T23:20:17.686-04:00DOMA is alive... for nowEarlier today, the Supreme Court struck down the Defense of Marriage Act.<br />
<br />
Well, actually, that's not technically true. SCOTUS struck down section 3 of DOMA, which prevents the federal government from giving marriage benefits to same-sex couples. Section 2 of DOMA, which permits a state without same-sex marriage to deny recognition of same-sex marriages from other states, is still in force, at least for the moment. But section 2 is a legal nightmare.<br />
<br />
Ordinarily, when you sign a contract, its validity is a federal matter. Either it is valid in every state, or it is valid in no state. Marriage no longer works that way. Suppose two men are married in New York. Under <i>Windsor</i>, the federal government now recognizes that marriage, and extends tax and other benefits to them. But then they travel to Texas, which does not recognize same-sex marriage. Suddenly, they are single. Or are they? It's unclear whether the federal government should apply New York or Texas law in determining benefits. But clearly, as far as the state of Texas is concerned, the men are single. <br />
<br />
Next, they go to California, where (it would seem, given the outcome of <i>Perry</i>) same-sex marriage is recognized. Are they married again? Did their original marriage contract from New York survive this transition? Or did it vanish at the Texas border? If it did, then that suggests a contract has been dissolved without any legal process, which seems troubling to me. If it didn't, then why wasn't it in force in Texas? Was it in abeyance somehow?<br />
<br />
If the contract was in some kind of legal limbo, but not actually dead, this suggests a rather interesting situation. A contract is valid but unenforceable thanks to a provision of Texas's state laws. State laws aren't allowed to impair contracts under the <a href="http://en.wikipedia.org/wiki/Contract_Clause">Contract Clause</a>. But maybe Congress can authorize them to do so via DOMA. Let's consider that.<br />
<br />
Section 2 of DOMA is as follows:<br />
<blockquote class="tr_bq">
No State, territory, or possession of the United States, or Indian
tribe, shall be required to give effect to any public act, record, or
judicial proceeding of any other State, territory, possession, or tribe
respecting a relationship between persons of the same sex that is
treated as a marriage under the laws of such other State, territory,
possession, or tribe, or a right or claim arising from such
relationship.</blockquote>
<br />
I find the term "required" rather interesting in this context. Required by whom, exactly? If it means "required by the courts," then this seems an entirely inappropriate attempt to dictate the outcomes of court cases. Under the doctrine of separation of powers, Congress isn't supposed to be doing that.<br />
<br />
On the other hand, if it refers to constitutional requirement (i.e. "required by the constitution"), that really isn't much better. If the constitution says one thing, and the law says something else, generally the constitution wins. Laws aren't allowed to dictate how the constitution is interpreted; again, that's a matter for the judiciary.<br />
<br />
Just about the only "required" that I believe Congress <i>could</i> refer to here would be "required by federal law." But if that's what the statute means, I don't think it will have any effect whatsoever. I'm not aware of any attempts by federal law to require Texas to recognize a same-sex marriage.<br />
<br />
In conclusion, it's not at all clear to me that DOMA section 2 even <i>needs</i> to be challenged on due process and equal protection grounds. It could fall to separation of powers.<br />
<br />
Updated: It's been brought to my attention that the contract clause is inapplicable to marriage contracts under longstanding precedent. This is why I'm not a lawyer. All the same, there are quite a few interesting questions raised above, so I'm leaving this post up. Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-78302392727737495372013-06-13T23:26:00.000-04:002013-06-13T23:33:18.523-04:00Why I am not recommending GeoNodeFor the past few weeks, I've been working for a professor on a project involving geographical data. As part of this project, I was asked to evaluate <a href="http://geonode.org/">GeoNode</a>. So I looked at the website, and after wandering around for a while trying to get past the usual marketing bullshit (side note: Can anyone explain to me why so many open source projects these days have such enormous quantities of marketing bullshit?), I eventually found <a href="http://geonode.org/workshops/devel/">some real documentation</a>. It was in the form of a developer "workshop," however, so I was a bit leery of it.<br />
<br />
<a name='more'></a>After some further exploration, I determined that GeoNode was probably too large and complex for our uses; we have existing code and would likely need to scrap most of it, and a lot of the features GeoNode touts just aren't important to us (e.g. social media integration). However, on the off-chance I was being overly pessimistic, I was asked to demo it anyway. While trying to figure out how to get started, I found <a href="http://docs.geonode.org/en/latest/index.html">more documentation</a>, this time in <a href="https://readthedocs.org/">Read the Docs</a> format. Since Read the Docs is quite common in the Python world, I decided the latter documentation was probably more authoritative and certainly better organized.<br />
<br />
I began with the <a href="http://docs.geonode.org/en/latest/intro/install.html">quick installation guide</a>. I use Ubuntu on my laptop, so I assumed it would be a simple matter of adding a PPA and <tt>sudo apt-get install geonode</tt>. Oops, there's no Raring candidate yet, we'll have to use the Quantal version instead. I moved on to the <a href="http://docs.geonode.org/en/latest/deploy/production.html">configuration instructions</a>. Now, I only wanted to set up a little demo, so "production" is probably inapplicable in my case, but onwards all the same.<br />
<br />
<pre>kevin@odin:~$ geonode createsuperuser
/usr/sbin/geonode: line 3: django-admin.py: command not found</pre>
<br />
OK, that didn't work. <tt>/usr/sbin/geonode</tt> is mercifully a really simple shell script, so I just change <tt>django-admin.py</tt> to <tt>django-admin</tt> and try again.<br />
<br />
<pre>kevin@odin:~$ geonode createsuperuser
Unknown command: 'createsuperuser'
Type 'django-admin help' for usage.</pre>
<br />
Well, that's interesting on multiple levels. <tt>createsuperuser</tt> is a perfectly valid <tt>django-admin</tt> subcommand, and even appears in its help. But usually, it's executed in the context of an existing Django project via <tt>manage.py</tt>. The instructions I was given never told me to change directories or anything; they just said to run this and it would work...<br />
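In hindsight the failure is explicable: <tt>createsuperuser</tt> is provided by <tt>django.contrib.auth</tt>, and <tt>django-admin</tt> only discovers app-provided subcommands once a settings module has been loaded. A sketch of what the wrapper script presumably needed (the module name <tt>geonode.settings</tt> is my guess, based on the <tt>settings.py</tt> the package installed):

```shell
# django-admin only exposes app-provided subcommands (createsuperuser
# comes from django.contrib.auth) after a settings module is loaded.
# The module name below is a guess based on the installed settings.py.
export DJANGO_SETTINGS_MODULE=geonode.settings
django-admin createsuperuser

# Equivalently, from a directory containing the project's manage.py,
# which points at the settings module itself:
python manage.py createsuperuser
```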
<br />
I look through the <a href="http://docs.geonode.org/en/latest/deploy/install.html">complete installation guide</a>, but that's about installing from source. I certainly hope I'm not going to have to install from source.<br />
<br />
I assume the PPA created a Django project for me, and confirm this by pointing a browser at localhost; I'm serving what appears to be a full-blown GeoNode installation via WSGI on port 80, but it hasn't <a href="https://docs.djangoproject.com/en/dev/ref/contrib/staticfiles/">had its static files collected</a> yet, so it looks ugly and broken. If I want to do anything with this project, I'll need to first find it. So I run a <tt>locate geonode</tt> and come up empty. Well, duh, I need to do <tt>sudo updatedb</tt> first! With that sorted, I try the locate again and find lots of interesting files and directories, none of which <i>quite</i> look like a Django project. I do manage to find a <tt>settings.py</tt> and <tt>urls.py</tt>, but the overall structure of the containing directory is nothing like any Django app I've ever seen. I can't find a <tt>manage.py</tt> and am basically flying blind.<br />
<br />
At this point, I decide to see if the first set of documentation might be more user-friendly. Blundering around, I find <a href="http://geonode.org/workshops/devel/projects/setup.html">a page for creating custom GeoNode projects</a>. These instructions look nothing like the instructions I found earlier and tell me to create a Django project. At this point I'm pretty desperate to get something, <i>anything</i>, working, so I just go ahead and do so. That works (though not without creating a <a href="http://pastebin.com/U2yCcu4m">rather worrying error</a>; I also had to steal <tt>settings.py</tt> from the other installation), but I've still got Apache/WSGI serving port 80. It's around this time that I turn to actually getting our data into the application, never mind which application.<br />
<br />
GeoNode may be a Django project, but it doesn't actually use Django's ORM, so far as I can tell. Instead, it uses a system called <a href="http://geoserver.org/">GeoServer</a> to manage various sources of data. Since they need to support non-SQL data, I suppose this makes sense, but I'm not feeling very charitable right now and the involvement of yet another layer of abstraction irks me. GeoServer has apparently been set up properly by the PPA, so I don't need to do very much. The default username and password aren't working, though, apparently because the PPA was intended for a production deployment, and we all know you can't have a default admin password on a production system. I try to interact with <a href="http://docs.geoserver.org/stable/en/user/datadirectory/data-dir-structure.html">GeoServer's config files</a>, but get nowhere because GeoServer wants you to use its web interface or RESTful API. After fiddling with my GeoNode installation, I eventually manage to create a functional superuser and get into GeoServer. Why does GeoNode auth affect GeoServer, seeing as they're independent? Dunno. From this point, things actually go well enough that I don't really feel compelled to finish the story, but I will say this: I <i>still have</i> that WSGI installation on port 80, and I'm going to keep it until I demo the custom installation because I don't want to break the latter.<br />
<br />
So why won't I be recommending this? Well, aside from it not actually serving our needs (we already have some Django code, HTML, and other nice things that we'd have to rebuild or scrap), it has two entirely separate sets of documentation. They don't fully agree with each other, and I wasn't able to get a simple installation done without hacking shell scripts, running <tt>locate</tt> commands, and generally making a lot of guesses. Oh yeah, and I also had to install a Quantal PPA when Raring has been out for over a month and a half.<br />
<br />
Yes, I could've just asked on a mailing list about this. But that's not good enough: I was asked to have a report on this ready within the day, and mailing lists can take days to respond at all and weeks to resolve issues. More abstractly, if I need to run to a mailing list <i>just to install the damn thing</i>, we're in trouble. Under the circumstances, how can I recommend this in good faith? If something goes wrong next week and I'm asked to fix it, how can I do so when I don't know the first thing about the installation? If I were asked to roll my custom installation out to production right now, I wouldn't have the first clue where to start, particularly since the PPA already created and configured a production setup entirely by itself. I suppose I'd need to graft the custom installation onto that. How should I? No idea.<br />
<br />
Documentation isn't just for clueless end users; we saw this way back in 2006 when <a href="http://www.catb.org/esr/writings/cups-horror.html">ESR played with CUPS</a>. A fancy landing page may be very nice and professional-looking, but it doesn't matter if you don't have usable instructions.<br />
<br />
This is not a case of not having enough documentation, either. This is a problem of never testing the documentation. At some point, someone who knows little or nothing about the project should be given the installation instructions and asked to follow them. Instead, this documentation assumes the end user knows where the packages will put the installation.<br />
<br />
Of course, other packages might have put the installation somewhere else, so maybe the documentation <i>can't</i> state the installation location. That's no excuse. If the packages were capable of customizing the installation, they should have prompted for it with Debconf (and included a sane default). If they <i>weren't</i>, they should all harmonize on a particular sane default and document it.<br />
<br />
One last thing: I'm sure GeoNode is a great product. This isn't intended to imply otherwise. But right now, its crappy documentation is tarnishing its brand.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com3tag:blogger.com,1999:blog-3598514451084757312.post-47364356913659913452013-05-21T19:38:00.000-04:002013-05-21T19:38:31.180-04:00"Studies have shown..."We've all been told that "studies have shown" something at one time or another. Sometimes, our interlocutor is kind enough to give us a citation (and sometimes they aren't). Well, let's do a thought experiment (if you're already familiar with significance testing, feel free to skim the next paragraph).<br />
<br />
Suppose you give 20 labs a drug and a placebo, and tell them to test one against the other in clinical trials. But instead of actually giving them a drug and a placebo, you give them two identical placebos (originally, I was going to use a <a href="http://rationalwiki.org/wiki/Homeopathy">homeopathic remedy</a> vs. a placebo, but I didn't want to get sidetracked). Assume the labs all use large sample sizes, statistical normalization, double blinding, and various other best practices. None of them make any mistakes (or commit <a href="http://en.wikipedia.org/wiki/MMR_vaccine_controversy">outright fraud</a>, for that matter) and they all conduct proper, well-designed experiments. Even under these ideal conditions, one of those labs (on average, and for pedants, we're assuming they all use α=5%) will tell you there's a statistically significant difference between the placebo and itself.<br />
<a name='more'></a><br />
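For the curious, the thought experiment above is easy to simulate. Here's a quick sketch of my own (not part of the original argument): many batches of 20 labs, each comparing two identical "treatments" with a large-sample z-test at α=5%.

```python
import math
import random

random.seed(42)  # for reproducibility

def one_trial(n=500):
    """One lab compares 'drug' vs 'placebo'; both arms draw from the same distribution."""
    drug = [random.gauss(0, 1) for _ in range(n)]
    placebo = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(drug) / n - sum(placebo) / n
    # Both arms have variance 1 by construction, so the standard error
    # of the difference of means is sqrt(2/n).
    z = mean_diff / math.sqrt(2 / n)
    return abs(z) > 1.96  # "statistically significant" at the 5% level

batches = 100
false_positives = sum(
    sum(one_trial() for _ in range(20))  # 20 labs per batch
    for _ in range(batches)
)
# On average, roughly 1 of the 20 labs per batch reports a significant difference
print(false_positives / batches)
```

Every "significant" result here is a false positive by construction, yet about one lab in twenty reports one anyway.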
Now, in and of itself, this is less of a big deal than it sounds because the other nineteen identical experiments will tend to drown the one out. But think about what makes the news. When was the last time you read about an experiment which failed to prove anything (for pedants: the <i>proper</i> phrasing is "failed to reject the null hypothesis")? They do <a href="http://arstechnica.com/science/2011/10/massive-15-year-study-finds-no-link-between-cell-phones-cancer/">sometimes</a> make the news, but not often enough, and frequently only after positive results have drawn attention to the area. Many journals do require advance notice of an experiment, so they can at least keep an accounting of failed experiments and recognize this in peer review, but the tendency of the news media to latch onto singular results is nonetheless disturbing. A single result <i>does not</i> mean something is "proven." It means the area merits further exploration and study.<br />
<br />
There are, however, some exceptions to this rule. The most important are metastudies, which take whole groups of studies into consideration. A metastudy of 20 studies with one positive and 19 negative would give a negative result. Of course, we're still assuming a best-case scenario, particularly that those 19 negatives are available to the metastudy's authors. Still, meta-analysis provides welcome insight, and is a good sign of the maturity of an area of research. If enough studies have been performed for a metastudy to be published, there's probably enough evidence for one side to be clearly right.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-57769522918506666782013-04-25T22:50:00.000-04:002013-04-25T22:50:09.385-04:00Nihilism and optimism<blockquote cite="http://dictionary.reference.com/browse/meaning" class="tr_bq">
<dl>
<dt><a href="http://dictionary.reference.com/browse/meaning">meaning</a></dt>
<dd>n. the end, <b>purpose</b>, or significance of something
</dd></dl>
NB: Emphasis added.
</blockquote>
Consider the definition given above by Dictionary.com. We're told the meaning of something is its purpose. If a thing has a purpose, that purpose must have been intended by someone. If we're told something has "inherent meaning," we must ask from whence this intent comes. There doesn't seem to be an obvious answer to this question.<br />
<a name='more'></a><br />
We might say that this intent originates with God. But personally, I don't believe in any god, <a href="http://en.wikipedia.org/wiki/Abrahamic_faith">capital-G</a> or otherwise. The existence or non-existence of God has been a point of philosophical contention for quite some time. There have been claims of proofs and disproofs on both sides. I'm not interested in getting bogged down in that kind of argument today. However, I'd like to note that many of the more credible arguments in favor of God's existence have greatly weakened the definition of "god." The <a href="http://en.wikipedia.org/wiki/Cosmological_argument">cosmological argument</a> is a good example of this. If you accept it (and many people do not), it establishes the existence of, well, <i>something</i>, which existed necessarily. What that means is that this whatever-it-is <i>must</i> have existed, as a <i>logical necessity</i>. But that's <i>all</i> we know about this object. We do not know that it still exists. We do not know that it can think. We <i>certainly</i> do not know that it is omnipotent. In my opinion, calling such an object a "god" is misleading. More to the point, we cannot ascribe intention to it without first proving it capable of intending things.<br />
<br />
So where does that leave us? God's existence is debatable, and I'm not satisfied by invoking him, her, or it as a source of meaning.<br />
<br />
We might simply ascribe this intent to any convenient being. Certainly, there are enough of us on the planet. But this really doesn't make very much sense. We're saying that the meaning of, say, life is whatever John Doe intends? That is entirely untenable.<br />
<br />
My conclusion is that we bring <i>our own</i> meaning to the table. The world is what we make of it. But we can hardly call this meaning "inherent," if it's different for each person. So there is no inherent meaning. All meaning is synthetic, constructed by us for us.<br />
<br />
This seems rather depressing, at first. Why construct meaning if it's all artifice? What's the use? We'll never get any "real" meaning, will we?<br />
<br />
I couldn't disagree more.<br />
<br />
Since all meaning is synthetic, this division of "real" meaning and "fake" meaning is, well, meaningless. All meaning is "real." The meaning I derive for my life is just as valid as the meaning you derive for yours. Far from diminishing us, the concept liberates us to think and believe what we like. But what about reality? Shouldn't we aspire to have some relationship between our beliefs and the real world?<br />
<br />
Of course. But we can do that from <i>within</i> this "meaning-relative" framework. We can invent <a href="http://en.wikipedia.org/wiki/Mathematics">symbols</a> and <a href="http://en.wikipedia.org/wiki/Axiom">rules</a> to abstract the details away; we can then <a href="http://en.wikipedia.org/wiki/Hypothesis">model</a> those details in a rigorous fashion, and <a href="http://en.wikipedia.org/wiki/Scientific_method">perform experiments</a> to validate or falsify our models. These abstractions may be very valuable for describing and predicting the universe. But if they are wrong, we don't hesitate to scrap them and start over. They are models, not truths. There are different ways of modeling these things, different sets of underlying symbols and rules. Which of these are correct? None of them and all of them. The universe is what it is; a model may be very true to the universe, but it is not "inherently true." It is, ultimately, a construct.<br />
<br />
We're just manipulating constructs, then? Why bother?<br />
<br />
These constructs are meaningless to the universe as a whole. Personally, I do <i>not</i> identify as "the universe as a whole." Perhaps some members of my audience do, but if that is the case, I'm afraid they are simply beyond my reach. For the rest of us, then, it is quite possible to ascribe meaning and value to these models, and to anything else we please.<br />
<br />
I have heard a rather interesting objection to this argument. The objection goes something like this:<br />
<blockquote class="tr_bq">
You say that classical logic is no better than any other kind of logic, and has no inherent meaning. Yet, in the same breath, you use basic logical principles and reasoning to make your point. How can you rely on these supposedly "meaningless constructs" in this way?</blockquote>
These constructs are meaningless to the universe, not to me, and not to you. I wrote this post in English. I could have, with some difficulty, written it in Spanish, or, with significantly greater difficulty, in Japanese or some other language. But English is, ultimately, just a collection of words, symbols, and self-referential definitions. I use English to express my ideas because it is convenient, not because it is somehow "superior" to Spanish. Similarly, I use classical logic instead of <a href="http://en.wikipedia.org/wiki/Paraconsistent_logic">paraconsistent logic</a> or (were I feeling adventurous) <a href="http://principiaconcordia.blogspot.com/2012/04/truth-in-turnstiles.html">lambda calculus</a> because it is convenient. Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-56875982141013393942013-04-15T17:54:00.002-04:002013-04-15T17:54:10.080-04:00The tragedy of music: part threeIn <a href="http://principiaconcordia.blogspot.com/2013/04/the-tragedy-of-music-part-one.html">part one</a>, we discussed some issues with Pythagorean tuning, and in <a href="http://principiaconcordia.blogspot.com/2013/04/the-tragedy-of-music-part-two.html">part two</a> we continued to quarter-comma meantone. The logical conclusion is <b>equal temperament</b>, a widely used (though not quite universal) modern system.<br />
<br />
Equal temperament is, in a way, simpler than any of the earlier systems. Instead of building an octave out of some fixed ratio, we <i>start</i> with an octave and subdivide it. The most natural way to do this is to make a semitone the twelfth root of two. Note that this is simply "a semitone" rather than a chromatic or diatonic semitone; in equal temperament, those are the same thing. Everything else can be built rather easily out of this fundamental unit, and we don't have overlapping or separated octaves. This, in turn, means no wolf fifth.<br />
<br />
Still, this fundamental unit can be unwieldy. Writing out "the twelfth root of two" all the time is annoying, and the mathematical notation for it is rather ugly. For this purpose, the so-called <b>cent</b> was invented. There are 1200 cents in an octave; 100 make up a semitone. It is a logarithmic unit. Increasing a tone by 1200 cents means doubling its frequency. 700 cents make up a perfect fifth, and 400 cents are a major third. The cent is a unit of relative measure; it is not meaningful to equate a single note to a given number of cents, except in relation to another note. That "other note" is often the A above middle C (A<sub>4</sub>), which, for convenience, is typically tuned to 440 Hz.<br />
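If you want to play with these numbers yourself, the conversion between frequency ratios and cents is a one-liner with logarithms (my own illustration, not from the post):

```python
import math

def cents(ratio):
    # 1200 cents per octave, on a logarithmic scale
    return 1200 * math.log2(ratio)

semitone = 2 ** (1 / 12)  # the equal-tempered semitone

print(round(cents(2)))               # octave: 1200
print(round(cents(semitone)))        # semitone: 100
print(round(cents(semitone ** 7)))   # equal-tempered fifth: 700
print(round(cents(3 / 2), 2))        # just fifth (3:2): 701.96, a hair sharp of 700
```

A pleasant property of the logarithmic scale is that intervals add: a fifth (700 cents) plus a fourth (500 cents) makes an octave (1200 cents).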
<br />
This system is very practical, which accounts for its widespread use. However, this is not a happy story with a happy ending; there is a problem. The twelfth root of two is an ugly, irrational number, and all of the other intervals are powers of it. There are no simple integer ratios at all. While the cent may make the system look nice and clean, it is an artificial unit created to hide the irrational semitone. In effect, we've taken the sourness of the wolf fifth and extended it over the whole octave, spreading it thinly to mask the dissonance. Practical this may be, but ideal it is not.<br />
<br />
This leads me to a stark realization: the Platonist's ideal of "perfect yet unattainable" forms is inapplicable to music. These problems are not of a physical nature. <i>Any</i> musical system will suffer from them, except for the trivial system which has one note per octave. There is a fundamental disconnect between equal temperament and just intonation. We cannot have both, even in theory. And <i>that</i> is the tragedy I've been talking about.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-32545244752788152132013-04-08T18:38:00.000-04:002013-04-15T17:55:04.152-04:00The tragedy of music, part twoIn <a href="http://principiaconcordia.blogspot.com/2013/04/the-tragedy-of-music-part-one.html">part one</a>, we discussed Pythagorean tuning and its failings. Despite this crucial issue, Pythagorean tuning was highly influential on musical theory.<br />
<br />
In the 1500's, a variant of Pythagorean tuning called <b>quarter-comma meantone</b> became popular. The "comma" in this name is <i>not</i> the Pythagorean comma we saw last time, but an entirely different comma.<br />
<br />
A major third is an interval spanning three staff positions and four semitones. It is not considered "perfect" in the same way as the perfect fifth, but is still regarded as a consonant interval, at least in theory. Under Pythagorean tuning, the major third was a rather dissonant 81:64. Quarter-comma meantone flattened this to a nicer 5:4, at the expense of a more dissonant perfect fifth.<br />
<br />
Interestingly, we now have irrational intervals: the perfect fifth is the fourth root of 5. This way, if we move up by four perfect fifths, and down by two octaves, we end up at 5:4, the justly-intoned major third.<br />
<br />
This sort of trade-off is debatable, of course, but it was an explicit design goal of quarter-comma meantone; the flatter fifth was viewed as an acceptable price for a just third.<br />
<br />
Now, suppose we start at middle C, as we did last time, and move up a major third. We arrive at middle E at a ratio of 5:4. Next we move up again to G♯, at a ratio of 25:16. Finally, we get to C<sub>5</sub>, at a ratio of 125:64. This is not an octave, but unlike in Pythagorean tuning, the interval is too flat rather than too sharp. This means there's a gap between the octaves, unlike the overlapping octaves of Pythagorean tuning.<br />
<br />
Like in Pythagorean tuning, one of the fifths must span this gap. That fifth is sharpened by 128:125, a much larger interval than the Pythagorean comma. It sounds extremely dissonant, to the point that it became known as the "wolf fifth" because it sounds like a wolf howling at the moon. This moniker is sometimes also applied to the diminished sixth produced by Pythagorean tuning under the equivalent problem, but note that quarter-comma meantone is <i>much</i> worse in this regard.<br />
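The arithmetic in the last few paragraphs is easy to check with exact fractions (a sketch of mine, not the original post's):

```python
from fractions import Fraction

fifth = 5 ** 0.25  # the quarter-comma meantone fifth, slightly flatter than 3:2

# Four fifths up and two octaves down land on a just major third of 5:4:
print(round(fifth ** 4 / 4, 10))  # 1.25

# Three just major thirds fall short of an octave...
three_thirds = Fraction(5, 4) ** 3
print(three_thirds)  # 125/64, flat of 2/1

# ...so one "fifth" must be sharpened by the leftover gap:
gap = Fraction(2) / three_thirds
print(gap)           # 128/125, the wolf fifth's penalty
print(float(gap))    # 1.024, far wider than the Pythagorean comma (~1.0136)
```

The comparison in the last two lines is the whole story: 128:125 is roughly 41 cents of sharpening, versus about 23 cents for the Pythagorean comma.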
<br />
Updated: In <a href="http://principiaconcordia.blogspot.com/2013/04/the-tragedy-of-music-part-three.html">part three</a>, we discuss modern equal temperament. Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-61554144342339958932013-04-06T15:13:00.000-04:002013-04-08T18:39:39.407-04:00The tragedy of music, part oneOnce upon a time, people noticed that certain sounds are pleasant to hear, while others are not. They began to experiment with it. The most important early discovery in this vein was the notion that sound is vibration. This is the fundamental principle of all musical theory, modern or ancient. The next discovery was that of the octave: If you make two vibrations, one at double the frequency of the other, they sound... the same, in some way. The faster one is clearly higher pitched, but they're harmonically equivalent.<br />
<br />
The Ancient Greeks built upon this discovery, and eventually produced what we now call <b>Pythagorean tuning</b>. Pythagorean tuning is built around simple frequency ratios; systems like this are called "just intonations." The most important ratio other than the octave is the so-called "perfect fifth," which describes a gap of five staff positions in modern musical notation. It spans a gap of seven semitones; there are a total of twelve semitones per octave in "conventional" systems such as Pythagorean tuning. In Pythagorean tuning, a perfect fifth is a ratio of 3:2, meaning the faster vibration oscillates thrice for every two oscillations of the slower vibration. So far, this is all very nice, and fits in well with the Pythagorean mathematical ideals of rationality.<br />
<br />
But there's a problem. I told you that a perfect fifth has a ratio of 3:2, is composed of seven semitones, twelve of which make up an octave, and the octave is 2:1. Suppose we start at middle C (or C<sub>4</sub>), and move a perfect fifth up. We arrive at G<sub>4</sub>, at 3:2 times our original frequency. We continue to D<sub>5</sub>, at 9:4. Next comes A<sub>5</sub>, at 27:8. Then E<sub>6</sub>, at 81:16, B<sub>6</sub>, at 243:32, F♯<sub>7</sub>, at 729:64, C♯<sub>8</sub>, at 2187:128, and finally G♯<sub>8</sub> at 6561:256. Now suppose we go <i>down</i> from C<sub>4</sub>. We find ourselves at F<sub>3</sub> at 2:3, B♭<sub>2</sub> at 4:9, E♭<sub>2</sub> at 8:27, and A♭<sub>1</sub> at 16:81. The ratio from C<sub>4</sub> to A♭<sub>1</sub> is 81:16, and the ratio from G♯<sub>8</sub> to C<sub>4</sub> is 6561:256. Multiplying, we see that the ratio from G♯<sub>8</sub> to A♭<sub>1</sub> is 531441:4096. But A♭ and G♯ are supposed to be the same note. Going by octaves, we <i>should</i> get 128:1. The other ratio is rather large and unwieldy, because we have seven octaves of space, so if we divide those out, we get a ratio from G♯ to A♭ of precisely 531441:524288. 
This interval is called the Pythagorean comma. It's the difference (remember, in music, we never add or subtract frequencies, so this is actually the <i>ratio</i>) between a chromatic and a diatonic semitone, among other things.<br />
<br />
So what does this actually mean? It means Pythagorean octaves overlap, if only a little, since G♯ is a little sharper than A♭. That's a problem for perfect fifths. One of the twelve "perfect" fifths is ruined by spanning this overlap, being flattened to a rather dissonant diminished sixth. So Pythagorean tuning, for all its beauty and simplicity, is not perfect.<br />
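The stacked-fifths arithmetic above can be verified with exact fractions (my own sketch, not from the post):

```python
from fractions import Fraction

fifth = Fraction(3, 2)

# Twelve perfect fifths "should" come back around to seven octaves, but don't quite:
twelve_fifths = fifth ** 12
seven_octaves = Fraction(2) ** 7

print(twelve_fifths)  # 531441/4096

comma = twelve_fifths / seven_octaves
print(comma)          # 531441/524288, the Pythagorean comma
print(float(comma))   # about 1.0136: G# ends up sharp of Ab
```

No amount of stacking can fix this: powers of 3/2 are never powers of 2, so the circle of fifths can never close exactly.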
<br />
Updated: In <a href="http://principiaconcordia.blogspot.com/2013/04/the-tragedy-of-music-part-two.html">part two</a>, we discuss quarter-comma meantone, a derivative of Pythagorean tuning.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-66839428801855715882013-01-18T02:55:00.000-05:002013-01-19T07:48:03.608-05:00One-time pads with PythonA <a href="http://en.wikipedia.org/wiki/One-time_pad">one-time pad</a> is a kind of unbreakable encryption. For most encryption, breaking it is a matter of throwing a lot of computational resources at the problem. Typically, the amount of resources needed greatly exceeds the amount that is practical to obtain, so most modern cryptography is secure enough. There are, however, some downsides to modern cryptography, the biggest of which is its complexity. If we want to use crypto for something like bank transactions, complexity is not that big of a deal, because centralization can hide most of the complexity from end-users. But if, for instance, you need to implement secure communications without a centralized <a href="http://en.wikipedia.org/wiki/Certificate_authority">certificate authority</a>, effectively implementing secure communications becomes a lot harder.<br />
<a name='more'></a><br />
But why would you not <i>want</i> to have a CA? Well, suppose you're a dissident in an oppressive state like China. You want to communicate securely with other dissidents, but can't or won't register with any sort of centralized organization. You have a small group of closely-trusted individuals who you can meet in real life, but only infrequently, to avoid arousing suspicion. You <i>could</i> still use something like a <a href="http://en.wikipedia.org/wiki/Web_of_trust">web of trust</a>, but it's overkill for your plans; all you need to do is send short encrypted messages to individuals, and having dedicated PGP software on your computer could be incriminating.<br />
<br />
So if modern cryptography doesn't fit these needs, what's so great about a one-time pad? It's <i>simple</i> yet perfectly secure if done correctly. Enough discussion, let's have some code.<br />
<br />
Here's the code to create keys:<br />
<br />
<pre><code>import os
import base64
LENGTH = (140 * 3)/4 # = 105
secret = os.urandom(LENGTH)
print base64.b64encode(secret, '#*') </code></pre>
<br />
These keys are <i>just</i> small enough to be texted or tweeted, but that's a bad idea because you'd be doing so "in the clear." Instead, you could save them to a flash drive or print them out as QR codes. Printing them has the advantage that paper is easier to destroy or render unreadable than flash drives, and it's hard to be sure you've properly overwritten data on a solid-state device. On the other hand, a flash drive is more concealable.<br />
<br />
The <tt>os.urandom</tt> function is <a href="http://docs.python.org/2/library/os.html#os.urandom">supposed to be</a> "suitable for cryptographic purposes," but that doesn't mean it's perfectly random; one-time pads are a <i>lot</i> more sensitive to this sort of thing than typical modern cryptography. This could be a concern for the safety of our key. On Linux in particular, it uses <tt>/dev/urandom</tt> when I'd personally be more comfortable using <tt>/dev/random</tt> and instructing the user to mash the keyboard until the program continues. On Windows, <a href="http://msdn.microsoft.com/en-us/library/windows/desktop/aa379942%28v=vs.85%29.aspx">it seeds a PRNG with a lot of disparate sources of entropy</a>, and is <i>probably</i> safe enough for 105 bytes at a time. I'd be concerned about longer keys, however.<br />
<br />
Here's the code to encrypt/decrypt something:<br />
<br />
<pre><code>import base64
key = "..." # Get from input or file
secret = base64.b64decode(key, '#*')
plaintext = u"Here's some secret text we want to encrypt.".encode('utf-8')
encrypted = bytearray()
(secret, plaintext) = (bytearray(secret), bytearray(plaintext))
for (pbyte, sbyte) in zip(plaintext, secret):
    encrypted.append(pbyte ^ sbyte) # remember, ^ is XOR
print base64.b64encode(str(encrypted), '#*')</code></pre>
<br />
If the plaintext is limited to 105 bytes of UTF-8, the encrypted output will be 140 characters or less (tweet-sized); moreover, those characters will be A-Z, a-z, 0-9, and the characters #, *, and =, meaning they'll be fine to text as well since <a href="http://en.wikipedia.org/wiki/GSM_03.38">the standard GSM 7-bit alphabet includes all three</a>. Very cheap handsets might not have a key for the = symbol, but since it only occurs at the end, and then only up to twice, it's not very difficult to work around (e.g. send a second text saying "That last one ended with two equals symbols."). If the plaintext is longer than the key, it'll be truncated to the length of the key.<br />
<br />
If you want to decrypt something, just encrypt it again with the same key. You'll need to Base64 decode at the beginning instead of Base64 encoding at the end, but it's otherwise exactly the same.<br />
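To make that symmetry concrete, here's the same scheme as a single roundtrip (my restatement, written in Python 3 syntax rather than the Python 2 of the snippets above):

```python
import base64
import os

# A fresh 105-byte key, as in the key-generation snippet above
key = os.urandom(105)

def xor_with_key(data, key):
    # XOR byte-for-byte; zip() truncates the data to the key length
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = u"Here's some secret text we want to encrypt.".encode("utf-8")
ciphertext = xor_with_key(plaintext, key)
print(base64.b64encode(ciphertext, b"#*").decode("ascii"))

# Decryption is literally the same operation with the same key,
# because (p XOR k) XOR k == p:
assert xor_with_key(ciphertext, key) == plaintext
```

The assert at the end is the whole point: XOR is its own inverse, which is why one function serves for both directions.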
<br />
It should go without saying that a key should be used only once, hence the name "one-time pad." Reusing a key, even once, renders the encryption ineffective at best.<br />
<br />
The obvious downsides to this system are symmetry and lack of authentication. Symmetry means the key to encrypt is the same as the key to decrypt. This poses practical challenges because the key needs to be secretly distributed to only those people you trust, while an asymmetric system allows public keys to be redistributed freely without endangering security. The lack of authentication means there's no way for the sender to prove his identity to the recipient, nor <i>vice versa</i>. If the authorities obtain a copy of the keys, they can masquerade as either party.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-53851190722807746722012-08-08T15:07:00.000-04:002012-08-08T15:13:21.096-04:00An exerciseSometimes, I feel like stretching my programming muscles. A while ago, I read about this problem, or one like it. It's not an especially hard problem, but I'd like to go over it anyway, because low-level data types can be unintuitive.<br />
<br />
Suppose you have two integers <i>a</i> and <i>b</i>. You want to add them, but you also want to be sure they don't <a href="http://en.wikipedia.org/wiki/Arithmetic_overflow">overflow</a>. You're working in a language where overflow is undefined, such as C. You can't use arbitrary precision variables or anything fancy like that, nor can you make use of low-level things like CPU overflow detection. You need to guard against overflow <i>mathematically</i>, without the use of any of those systems. Maybe you're working on a reduced architecture that doesn't provide overflow detection. Maybe your language or library is deficient and lacks an arbitrary precision integer type. It doesn't really matter. The point is, taking away all those "outs", we're left with a somewhat interesting problem.<br />
<br />
Here are some assumptions you may make:<br />
<ol>
<li>You have a constant called <code>INT_MAX</code> which is equal to the largest integer which can be represented on your system. You also have another called <code>INT_MIN</code>, which serves a similar purpose for the smallest (most negative) integer.</li>
<li>You're working in <a href="http://en.wikipedia.org/wiki/Two's_complement">two's complement</a>, but overflow is still undefined.</li>
<li>You <i>do not</i> have the exact number of bits available (but you could figure it out from <code>INT_MAX</code>, so that's not much of a restriction).</li>
<li>You may throw an exception to indicate an overflow condition.</li>
</ol>
Actually think about this problem for a few minutes. I'll put the solution after the break. I really think most people who are likely to read this blog can solve this on their own, so please actually try this.<br />
<a name='more'></a><br />
Here's the solution:
<pre>int safeAdd(int a, int b){
    const int halfMax = INT_MAX/2; //rounded down since INT_MAX is odd
    const int halfMin = INT_MIN/2; //exact, since INT_MIN is even
    //First, check if they're too big:
    if(a > halfMax && b > halfMax){
        throw overflowException();
    }
    if((a == INT_MAX && b > 0) || (b == INT_MAX && a > 0)){
        throw overflowException();
    }
    if(a > halfMax){
        a -= halfMax; // make a small enough to add it to b
        if(a + b > halfMax + 1){ // +1: a sum of exactly INT_MAX is still representable
            throw overflowException();
        }
        return a + b + halfMax;
    }
    if(b > halfMax){
        b -= halfMax;
        if(a + b > halfMax + 1){
            throw overflowException();
        }
        return a + b + halfMax;
    }
    //They're not too big. Check if they're too small:
    if(a < halfMin && b < halfMin){
        throw underflowException();
    }
    if(a < halfMin){
        a -= halfMin; //halfMin is negative, so this *increases* a
        if(a + b < halfMin){
            throw underflowException();
        }
        return a + b + halfMin;
    }
    if(b < halfMin){
        b -= halfMin;
        if(a + b < halfMin){
            throw underflowException();
        }
        return a + b + halfMin;
    }
    //They're neither too big nor too small, so:
    return a + b;
}</pre><br />
The second if statement looks redundant given the third and fourth, and has no mirror in the negatives section. Determining its purpose is left as an exercise for the reader.<br />
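For comparison, a more common portable idiom skips the halving trick and instead rearranges each bounds check so that the subtraction itself can never overflow. This is a standard technique, sketched here with the same exception-throwing convention, not the approach developed above:

```cpp
#include <climits>
#include <stdexcept>

// Rearranged-comparison idiom: INT_MAX - b (for b > 0) and
// INT_MIN - b (for b < 0) are always representable, so the
// checks below are themselves overflow-free.
int safeAddSimple(int a, int b) {
    if (b > 0 && a > INT_MAX - b) {
        throw std::overflow_error("a + b > INT_MAX");
    }
    if (b < 0 && a < INT_MIN - b) {
        throw std::underflow_error("a + b < INT_MIN");
    }
    return a + b; // guaranteed representable
}
```

In real code you would usually reach for a compiler intrinsic such as GCC/Clang's <code>__builtin_add_overflow</code>, but the post's premise explicitly rules out that sort of help.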
<br />
Disclaimer: While I have put a fair amount of thought into the above code, I have <em>not</em> tried it, so use it at your own peril!Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-53714029199654692692012-07-22T20:46:00.000-04:002012-07-22T20:47:37.496-04:00Why I turned off Firefox's inline autocompleteFirefox's location bar inline autocomplete is, for me, the single most annoying aspect of Firefox 14. Here's a list of why I'm annoyed:<br />
<ol>
<li>Pressing tab navigates to the first drop-down suggestion. It has no relation whatsoever to the inline suggestion, contrary to appearances, and doesn't let me edit the URL before navigating to it. </li>
<li>Pressing backspace removes the inline suggestion, but doesn't erase an actual character that I've typed. This throws off <a href="http://en.wikipedia.org/wiki/touch_typing">touch typing</a>. </li>
<li>When I'm typing a Google search, if Firefox thinks I made a typo, it adds ">>" followed by the "correct" search terms. I've had false positives here, and pressing enter searches <i>the whole thing</i>, which I find confusing. Maybe there's a "right" way to use this feature, but if that's the case, then in my opinion it's poorly <a href="http://en.wikipedia.org/wiki/affordance">afforded</a>. Or maybe Mozilla really thinks people want to search for both correct and incorrect terms. I don't know.</li>
<li>Pressing enter goes to the autocompleted URL or search terms, instead of whatever I actually typed. Since I often type rather quickly, this frequently results in searches for the wrong keywords.</li>
<li>If I want to edit the autocompleted URL (e.g. to add additional fragments to it), I have to press the right arrow. This takes me off the home row. It is intuitive to use tab for this purpose, but that does something entirely different (see point 1).</li>
<li>Last I checked, I wasn't able to find any addons on AMO to address any of these issues. It's as if the new autocomplete fell out of the sky one day. The only addons I was able to find were all about adding inline autocomplete to older versions of Firefox.</li>
</ol>
I like Firefox, and use it as my primary browser. But I see this as a major blemish, and I went to the trouble of disabling it completely. Here's how:<br />
<ol>
<li>Go to <code>about:config</code> in the location bar (Blogger refuses to link to it, unfortunately). </li>
<li>If Firefox warns you, click through the warning.</li>
<li>Type <code>browser.urlbar.autoFill</code> in the Search box.</li>
<li>Double-click the first result to toggle it to <code>false</code>.</li>
<li>You're done! No need to restart the browser or anything.</li>
</ol>
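Alternatively, the same preference can be pinned in a <code>user.js</code> file in your Firefox profile folder, which Firefox re-applies at every startup (the file name and mechanism are standard; the exact profile path varies by system):

```
// user.js -- place in your Firefox profile folder; Firefox applies
// these prefs at every startup, so the setting survives upgrades.
user_pref("browser.urlbar.autoFill", false);
```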
<br />Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com1tag:blogger.com,1999:blog-3598514451084757312.post-75016909064880417812012-07-21T17:07:00.000-04:002012-07-21T17:07:33.379-04:00SpaceChem optimization<a href="http://www.spacechemthegame.com/">SpaceChem</a> is a programming video game disguised as a chemistry video game. A static description cannot really do it justice, so here's a trailer:
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/Gk8JwvtVs38?feature=player_embedded' frameborder='0'></iframe></div>
<br />
Honestly, the trailer doesn't do it justice either, so maybe you should go grab the demo from <a href="http://store.steampowered.com/app/92800/">Steam</a>. Anyway, SpaceChem has two types of puzzles: Research and Pipeline puzzles. Research puzzles are fairly straightforward to work with because an input/output operation always takes exactly one cycle, whereas with a Pipeline puzzle, the <strike>tubes</strike> pipes can be clogged or empty, making your I/O operations block. This means that performance problems with one reactor can affect others. But optimization is not an all-or-nothing affair. You need to know where to concentrate your work. That's what this post is about.<br />
<br />
<a name='more'></a>
<h2>Start at the end</h2>
You should start your analysis at the ends of the pipeline, that is, the reactors feeding into outputs. If you never see waiting(α) or waiting(β) on <i>those</i> reactors, then they are the bottlenecks. This is fairly straightforward: the speed of those reactors is your ultimate goal, and if nothing external is slowing them down, the only speed gains to be realized will be within those reactors.<br />
<br />
On the other hand, if those reactors are waiting, look into what they're waiting on. Keep track of which reactors you care about. You should only care about reactors that directly or indirectly cause the end reactors to wait.<br />
<h2>
Trace in both directions</h2>
There are two types of waiting in SpaceChem: waiting on input and waiting on output. End reactors generally don't wait on output, and beginning reactors don't wait on input. If a reactor you're interested in is waiting on input, look at what's producing the input. If it's waiting on output, look at what's consuming the output. If both pipes are waiting on output, you probably shouldn't care about this reactor since it's producing faster than it's being consumed.<br />
<h2>
Identify bottlenecks</h2>
Follow the trail of waiting until you find a reactor that doesn't seem to wait very much, but often blocks other reactors. This reactor is your bottleneck. Until you improve it, none of your other optimizations will have much effect.<br />
<br />
If you can't find a bottleneck easily, just focus on the end reactors until one emerges.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-13311541567143086612012-07-19T15:13:00.000-04:002012-07-19T15:13:08.242-04:00Do we even need copyright?As we established last time, copyright is an <a href="http://principiaconcordia.blogspot.com/2012/05/copyright-is-subsidy.html">indirect subsidy</a>. The purpose of this is to incentivize creativity, the idea being that art for its own sake is important, but won't pay the bills on its own. But art has a nice property: people like it, especially if it's original and well-executed. As there's both a supply and a demand for original, creative expression, I must wonder whether the market can connect the two without the use of a subsidy.<br />
<br />
<a name='more'></a>Historically, creative works were paid for by <a href="http://en.wikipedia.org/wiki/Patronage#Arts">patronage</a>. You either commissioned works on a case-by-case basis, or became the patron of an artist. Either way, the work was paid for before it was even made. There was no concern about whether something would sell or not; either someone was paying for it, or they weren't. This gave artists relative creative freedom compared to today. But there are a few problems with this approach:<br />
<ul>
<li>Most patrons are rich. Most artists will cater to their patrons. So most art will be targeted at the rich, to the detriment of the public.</li>
<li>Many rich people these days see other causes as more important than art, and spend their money elsewhere. I'm not making a value judgement either way about this, but obviously it harms the patronage system.</li>
<li>It is impractical at scale as it is wholly dependent upon a relatively small segment of the market.</li>
</ul>
There are also a few major problems with the copyright system:<br />
<ul>
<li>It grants a monopoly to the artist, which is economically troubling.</li>
<li>It places <a href="http://en.wikipedia.org/wiki/Fair_use">fair use</a> in a <i>de facto</i> legal gray area due to the lack of hard-and-fast rules.
<ul>
<li>Even if there were rules, I doubt the average person would be familiar with them.</li>
</ul>
</li>
<li>If I own something, I ought to be able to do whatever I please with it. Copyright breaks this assumption. In the US, <a href="http://en.wikipedia.org/wiki/First_sale">first sale</a> mitigates this but does not eliminate it.</li>
<li>It places artists in the position of people selling products. But that's not what artists are good at, so they delegate this to businesspeople, creating middlemen.</li>
</ul>
The important thing to remember here is that our goal is to <i>fund creative works</i>, and not to protect notional "rights" of authors. So how else are works funded? Well, <a href="http://www.kickstarter.com/">Kickstarter</a> is focused more on getting creative projects started than on continually funding them, but it's still an interesting model. For those unfamiliar, here's how it works:<br />
<ol>
<li>A creative person comes up with an idea. It could be a product, book, movie, or any number of other works. Creativity is not always at the heart of the idea, but it often is involved.</li>
<li>The creator figures out how much revenue in preorders they would need to launch the idea (start manufacturing product, start shooting film, etc).</li>
<li>The creator starts a Kickstarter project with that amount of money as the goal. (S)he writes a pitch for the project and sets "rewards" for various different levels of contribution. So you might get a movie poster for a small donation, and a nice boxed copy of the movie for a larger donation.</li>
<li>People pledge money to the project until it reaches its deadline.</li>
<li>If the project fails to meet its goal, no one is charged.</li>
</ol>
This process is not intended to be a one-off fling for the creator. It's supposed to help the creator launch a self-sustaining business. However, I don't see why it couldn't be adapted to pay for one-off creative works. What if we had an open-source Kickstarter? It would work like this:<br />
<ol>
<li>Follow steps 1-5 above. Make sure you're making something ordinarily subject to copyright.</li>
<li>If the project succeeds, release the work under a <a href="http://freedomdefined.org/">free</a> license, such as <a href="http://creativecommons.org/licenses/by-sa/3.0/">CC-BY-SA</a>, or if you're feeling really generous, <a href="http://creativecommons.org/publicdomain/zero/1.0/">CC-0</a>. Mention this in the initial pitch.</li>
</ol>
If we had something like this, anyone could fund free creative works just by using it, largely or entirely bypassing the copyright system. And since anyone using Kickstarter can do this right now, copyright seems superfluous to the process. (Yes, licenses like CC-BY-SA depend on copyright, but look at that license's actual requirements: you must attribute, which is basic courtesy anyway, and you must <a href="http://en.wikipedia.org/wiki/copyleft">share alike</a>, which is moot in a post-copyright world.) So maybe we should consider getting rid of copyright entirely.<br />
Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-73089157701045222542012-07-07T14:46:00.002-04:002012-07-07T14:47:37.338-04:00The worst-case scenario<blockquote class="tr_bq">
A lot of the lessons from Fukushima have been obvious for a while.
Nuclear safety is a global challenge, and every country has to learn
from the best practices of others. These best practices include
retrofitting passive safety features<b> wherever possible</b>, and continuing
to update safety measures in response to our changing understanding of
the plant's environment. -- <a href="http://arstechnica.com/science/2012/07/fukushima-a-disaster-made-in-japan/"><i>Ars Technica</i></a> (emphasis added)</blockquote>
This advice certainly sounds right to me. But when I <a href="http://lesswrong.com/lw/b4f/sotw_check_consequentialism/">check consequentialism</a>, I find that the words "wherever possible" give me pause. On the one hand, it seems that our only question should be "What effect does a given safety measure have on actual safety?" But proper consequentialism requires a two-sided analysis, so we must also ask "How much will this cost in relation to the <a href="http://en.wikipedia.org/wiki/Expected_value">expected number</a> of lives saved? <a href="http://lesswrong.com/lw/1yf/the_price_of_life/">Is it worth it</a>?" <br />
<br />
I don't like this conclusion.<br />
<br />
It certainly seems logical enough: if we're to make a change, we must do a proper cost-benefit analysis. But imagine this hypothetical: you're offered an investing opportunity. The expected value is better than any other investment you're likely to find, and you're sure that it is legitimate. However, it's a risky investment, with a chance of returning nothing at all, not even the money you put in. I don't think any sane person would invest <i>all</i> of their money in such a scheme, and I don't think this falls under irrational risk aversion either. Yet that is exactly what basic economics (which is very similar to utilitarianism) says we ought to do, perhaps leaving some money out for basic living expenses, but certainly without a safety net. If someone did invest all their money like that, and ended up bankrupt, they would have only themselves to blame.<br />
<br />
Of course, most real financial analysts would never do this, instead preferring to diversify. But I can't understand how you get from "throw all your money at the best expected value" to "diversify", and I think it's just a kludge to make the system work right. <br />
<br />
What can we distil from this? I think it's clear that we need to consider the worst-case scenario in any evaluation. But which one? A situation can always be worse. So, to keep this reasonable, we need to limit ourselves to outcomes related to the proposed action: those whose probabilities the action significantly affects. Of those, we should focus on the very worst, and ask ourselves "Is this outcome tolerable? What effect does our decision have on its likelihood?"<br />
<br />
But what of utilitarianism? This doesn't seem compatible with it. I feel that we may revise utilitarianism if we find it necessary; I may explain why in a later post.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0tag:blogger.com,1999:blog-3598514451084757312.post-77382876791748924552012-05-14T23:55:00.000-04:002012-05-16T18:45:04.485-04:00Is Fanfiction Legal?<i>If you're expecting practical advice, please look elsewhere. This is intended as a high-level discussion of copyright law </i>in general,<i> not a practical advisory. If you need legal advice (for example, because someone sent you a DMCA takedown notice), I recommend contacting the <a href="http://eff.org/">Electronic Frontier Foundation</a>, as they sometimes offer </i>pro bono <i>assistance; even if they won't take your case, they may be able to refer you to someone who will. Please don't act on anything in this post without consulting a lawyer.</i><br />
<i><br />This is based on US law and may be inaccurate elsewhere. </i><br />
<i> </i> <br />
<a href="http://en.wikipedia.org/wiki/Fanfiction">Fanfiction</a> is fiction involving the characters, setting, or other aspects of existing fiction, and written without the original author's permission. Many authors approve of this practice, but quite a few others are strongly opposed to it. Worse, authors may selectively enforce their copyrights as much as they please; if, say, Alice writes something, and Bob and Carol each make some fanfiction of it, Alice has every right to ignore Bob and <i>only</i> sue Carol, perhaps because she likes Bob's writing better or she thinks Carol is a hack. This is the case even if Alice previously said she <i>likes</i> fanfiction and doesn't mind people making it.<br />
<br />
For the sake of argument, let's just stick with this <a href="http://en.wikipedia.org/wiki/Alice_and_Bob">Alice/Bob/Carol</a> example. So suppose Alice actually <i>does</i> sue Carol. What happens? Well, it depends. Basically, Alice will need to allege one of the following, depending on the circumstances:<br />
<ul>
<li>Unauthorized <b>distribution</b> of the original (if the fanfiction is quite similar to the original)</li>
<li>Unauthorized preparation of a <b>derivative work</b> of the original (if it is one).</li>
</ul>
The former is not very likely to work unless the fanfiction contains large amounts of Alice's original text. Still, it could happen if, say, Carol quoted a lot of Alice's text for some reason. But it's not really relevant to our discussion, since most fanfiction doesn't have that.<br />
<br />
<a href="http://en.wikipedia.org/wiki/Derivative_work">Derivative works</a> are works "based upon one or more pre-existing works." The "based upon" language suggests, to me, that a derivative work would not exist but for the original. You'll note that the statute does <i>not</i> say "incorporating" or "equivalent to". In fact, if the supposed derivative is "equivalent" to the pre-existing work, then it's not really a derivative at all; it's basically a copy of the original. Furthermore, Carol's derivative need not include <i>any</i> of Alice's text to be a derivative work.<br />
<br />
So what are some of the defenses Carol might raise? Well, here are a few that I thought of:<br />
<ul>
<li><b>Fair use</b> is complicated and will be discussed below</li>
<li><a href="http://en.wikipedia.org/wiki/Estoppel"><b>Estoppel</b></a>, if, for instance, Alice said something like "everyone should feel free to create and distribute fanfiction of my work." This seems like a long shot to me; I'm not aware of any examples of estoppel applied to copyright law, but Carol would probably <a href="http://en.wikipedia.org/wiki/Alternative_pleading">assert it anyway</a> since it can't hurt her. Furthermore, I don't think it would work unless Alice's statement was worded like a real copyright license, as opposed to a general statement of support such as "I think fanfiction is a good thing," or even "I'm flattered when people make fanfiction of my work."</li>
<li><b>Innocent infringement</b>, if Alice's publisher somehow forgot to include a copyright notice. This would likely force Carol to take down her fanfiction, but protect her from damages. I don't think any publishers are likely to forget the copyright notice any time soon, so this is largely hypothetical.</li>
<li>Bare non-infringement, predicated on the <a href="http://en.wikipedia.org/wiki/Idea-expression_divide"><b>idea-expression divide</b></a>. Carol may admit that she copied certain <i>ideas</i> from Alice's work, but insist that she never copied any protected <i>expressions</i> of those ideas. I have no idea whether this would work, since to the best of my knowledge it's never been tried. On the one hand, this is often associated with attempts to copyright bare facts, as in <i><a href="http://en.wikipedia.org/wiki/Feist_v._Rural">Feist v. Rural</a></i>, which Alice is not doing. But on the other hand, <a href="http://en.wikipedia.org/wiki/Baker_v._Selden"><i>Baker v. Selden</i></a> had little to do with facts and more to do with patentable ideas versus copyrightable expressions. The terms <i>idea</i> and <i>expression</i> don't have terribly clear definitions, in my opinion.</li>
</ul>
On to fair use. There are four components:<br />
<ol>
<li>The purpose and character of the use. Carol would probably win this since she's being creative and (presumably) isn't making money from it. But if Carol's work is too similar in story structure or wording to Alice's work, Carol might well lose this one.</li>
<li>The nature of the copyrighted work. Alice would likely win this part, since her work is (presumably) fictional and creative rather than factual and academic.</li>
<li>The amount and substantiality of the portion used in relation to the copyrighted work as a whole. This is very difficult to assess since Carol didn't copy explicit passages from Alice's work. Note that this is explicitly <i>not</i> "just a word count"; the "in relation" language ensures that courts consider the <i>importance</i> of the copied portion, rather than its absolute size.</li>
<li>The effect of the use upon the potential market for or value of the copyrighted work. This is going to be negligible or positive, so Carol will probably win this one.</li>
</ol>
The above are not evenly weighted, and there is no simple algorithm for figuring out who won, even if you know who won each part. But in general, the fourth part is the most important, although the other parts are still quite relevant. The first part is also fairly important, especially for derivative work cases.<br />
<br />
Personally, I'd really prefer to see Carol win, and perhaps that's distorted the outcome above. So I'm not going to say who wins, especially since a court might disagree with me about the four outcomes above. But, to be honest, I think that if Carol won parts 1 and 4, she'd probably win the whole case.<br />
<br />
Finally, I never got an opportunity to link it, but Wikipedia has a nice article about <a href="http://en.wikipedia.org/wiki/Legal_issues_with_fan_fiction">legal issues with fanfiction</a>, which also includes trademark law (and a few other things besides), which I never got around to discussing. In short, "<a href="http://tvtropes.org/pmwiki/pmwiki.php/Main/IDoNotOwn">I do not own</a>" is potentially effective for trademarks, though not for copyright.Anonymoushttp://www.blogger.com/profile/11223701982593441719noreply@blogger.com0