Categories
tech

Convenience is Overrated

I ran into a fascinating blog post by an 18-year-old who’s going back and playing video games older than him:  http://gamesolderthanme.blogspot.com/2010/09/contra.html.

“Now compare this to any modern game and tell me if you have played anything even remotely this difficult recently. You know why? Because today’s generation of gamers don’t like to die in our games. We don’t like the difficult games that make us practice levels again and again until we finally get it right, just to go die at the next level and have to start all over…

“And after retrying for about an hour, I finally was able to beat the first level and I felt this odd feeling. It was a sense of accomplishment that washed over me and, to my surprise, made me keep playing this game that I kept dying in…”

You should read the whole article, but the game he played, Contra, was not actually that hard.  Although you could ‘cheat’ and get 30 lives, it was entirely possible for someone with my moderate skill to beat the game with the standard 3 lives.  Today’s gamers are happy to be challenged by complex and demanding games.  As I recently found out playing Call of Duty 4, the precise timing and strategic decision-making required in modern games are equal to or greater than what we needed to beat Contra.


The issue is not really one of difficulty: dying after getting shot once is just too inconvenient for the kids these days. But that sense of accomplishment at having put a game’s logic into your muscle memory – which takes days no matter how you slice it – was at the center of the gaming experience back in Contra’s day.  Dying all the time was part of the frustrating journey from incompetence to competence.  That journey hasn’t changed – even new “casual games” are satisfying to the extent that you figure them out.  The Trophies for Everyone approach doesn’t actually increase your self-esteem, nor should it.

Making the game more convenient is an attempt to make your unavoidable incompetence seem less unpleasant.  But incompetence is its own punishment, and competence is its own reward.  Anyone who beat Contra will tell you the ending credit sequence wasn’t what made it worthwhile… and as the credits end, you continue playing from the start of the game, from a narrative perspective having accomplished nothing!  In its laughable disdain for the trappings of accomplishment, Contra actually sets up an environment where you can feel competent.  If I can’t be bad at something, why would I bother getting good?  I want to practice!  In the end, of course, this is a useless competence to have, but then, nobody gets a swelled head about being good at Contra.

Categories
tech

Copyright Reform Bad for GPL/Open Source?

Ars Technica has a piece about Sweden’s Pirate Party pushing for copyright reform. Counterintuitively, this may weaken alternative licensing schemes (“copyleft”) such as the GPL. The GPL relies on strong copyright law to enforce its stipulation that derivative works also carry the GPL (“share-alike”), which keeps open source projects from going proprietary.

The Pirate Party’s plan, which proposes five-year copyright terms, would release companies from copyleft licensing obligations a mere five years after the code is published. This effectively guts copyleft as a vehicle for encouraging broader code disclosure and makes copyleft licenses such as the GPL behave more like permissive licenses.

If, after five years, you can do whatever you want with copyrighted/copylefted code, then you’re not bound by the GPL. Not only would free software be fueling proprietary projects without any code in return, but developers of proprietary code would be less likely to help develop the free version and more likely to just wait out the five years and simply take the code. As Richard Stallman notes:

Once the Swedish Pirate Party had announced its platform, free software developers noticed this effect and began proposing a special rule for free software: to make copyright last longer for free software, so that it can continue to be copylefted. This explicit exception for free software would counterbalance the effective exception for proprietary software. Even ten years ought to be enough, I think. However, the proposal met with resistance from the Pirate Party’s leaders, who objected to the idea of a longer copyright for a special case.

I understand libertarian inclinations, but this is a case where a stricter law actually leads to more freedom. Pirate Party supporters, please give Stallman’s argument some serious thought.

Categories
tech

Pirate Bay Trial: Copying vs. Making Available

As the Pirate Bay Trial gets under way, prosecutors have dropped the charges that the popular BitTorrent tracker site made copies of copyrighted material.  The remaining charges assert that the Pirate Bay made copyrighted material available.  At the heart of the case is whether keeping track of links to other sites constitutes “making available.”  Pirate Bay doesn’t host these files, but points to other BitTorrent trackers and ultimately to other users who have parts of the file on their computers.

The Pirate Bay’s servers don’t host the material, but the Motion Picture Association of America said its operators have profited “by enabling the illegal distribution of audio-visual and other creative works on a vast scale.” [Information Week]

I’ve posted before about the futility of trying to enforce a ban on linking (see, e.g., DeCSS, the DVD decryption app).  We have Google.  A file can be deleted and reappear elsewhere on the net, and it can be found again without any trouble.  You may as well ban Google for making the Pirate Bay available.

It’s really a technicality whether the torrent file (which describes the movie/mp3/other file) resides at the Pirate Bay, or on another site which is indexed by the Pirate Bay, or in fact is distributed over the same peer network across which the files travel.
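
To see why that’s a technicality, it helps to look at what a torrent file actually contains: nothing but metadata. The sketch below is a minimal decoder for bencode (the encoding BitTorrent uses for .torrent files), applied to a toy, hand-made single-file torrent; the tracker URL and file name are invented for illustration. Notice that the file’s content never appears — only a name, a size, and (in a real torrent) piece hashes.

```python
# Minimal bencode decoder -- a sketch showing that a .torrent file
# holds only metadata (tracker URL, file name, sizes), never the
# content itself. The sample torrent below is made up for illustration.

def bdecode(data, i=0):
    """Decode one bencoded value starting at index i; return (value, next_i)."""
    c = data[i:i+1]
    if c == b"i":                      # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i+1:end]), end + 1
    if c == b"l":                      # list: l<items>e
        items, i = [], i + 1
        while data[i:i+1] != b"e":
            item, i = bdecode(data, i)
            items.append(item)
        return items, i + 1
    if c == b"d":                      # dict: d<key><value>...e
        d, i = {}, i + 1
        while data[i:i+1] != b"e":
            key, i = bdecode(data, i)
            val, i = bdecode(data, i)
            d[key] = val
        return d, i + 1
    # byte string: <length>:<bytes>
    colon = data.index(b":", i)
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start+length], start + length

# A toy single-file torrent: an announce URL plus file metadata.
sample = (b"d8:announce30:http://tracker.example.com:80/"
          b"4:infod6:lengthi1048576e4:name9:movie.avi"
          b"12:piece lengthi262144eee")

meta, _ = bdecode(sample)
print(meta[b"announce"])        # the tracker URL
print(meta[b"info"][b"name"])   # the file name -- no file content anywhere
```

Whether that handful of metadata sits on the Pirate Bay’s servers, on another index, or floats around the peer network itself, the copyrighted bits are never in it.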

Categories
tech

Confluence

I got tired of updating a thousand different profiles and pointing people to various sites just to share an idea or link or schedule.  So I’m centralizing my various stuff here on elijacobowitz.com.

This site now includes the information about my yoga teaching that was posted on yogaeli.blogspot.com.

I slapped up a small Google Cal of my yoga teaching schedule… it’s almost too small to be useful, but too big to fit into my layout smoothly.  Anyone have a suggestion for that?

Update:  just pulled in some of the content from my technology blog.  Some of it is news-oriented and some is more “big ideas”… I’ll try to integrate the stuff that fits.

Categories
tech

Web Widgets

Read/Write Web posts a good survey piece on web widgets. They’re mini-applications that add functionality to your site from another site.

Traditionally, if you needed a particular tool, you’d download it and run it on your PC. Then the web came along, and now you can edit images, cut video, and of course work on your documents and spreadsheets all within your web browser. Great! But all those functions are on different sites. What if you want to use some of those advanced functions on your own site (like your blog)?

Enter widgets. They allow your blog to call up functions (and possibly content) from another web site using standard web code. That’s what allows YouTube clips to appear in blog posts. But that’s just the beginning, as Read/Write Web explains.

At the other end of the spectrum from widgets is SaaS (software as a service). Enterprise applications are now being delivered not in a shrink-wrapped box for you to install on your big local server, but in real time over the web. Of course you pay big bucks for this, but it can actually be cheaper than maintaining the software and local server. As these services mature, it makes more and more sense from an engineering perspective — why solve a problem every time it occurs when you can solve it once, centrally?

As computing power gets cheaper, it becomes more efficient for medium and large web apps to provide widget-like integration with users’ own sites. You probably wouldn’t want mission-critical data out on a free server (although a lot of people are putting sensitive files up on Google Apps). But what if you could invoke another site’s enterprise-level functionality, apply it to your local data, and mash it together on your web site?

Why would anyone give away such critical software? The same reason that sites give away widget functionality now: because user participation (and the resultant market share) is more valuable than license sales. Just as “You’re a Nobody Unless Your Name Googles Well,” your web app is a nobody unless users can access it freely, as in freedom and as in beer. (See my previous post; same concept, different context.)

While the paid SaaS model makes sense as a transition from the buying-software-in-a-box model, license fees and proprietary APIs only hinder the success of web services. We may end up with something that much more closely resembles YouTube when the widgets grow up.

Categories
tech

Downside of Ubiquitous Data

Ars Technica picks up on a paper by Harvard professor Viktor Mayer-Schönberger on the dystopian possibilities of ubiquitous data storage. He describes a digital panopticon:

If whatever we do can be held against us years later, if all our impulsive comments are preserved, they can easily be combined into a composite picture of ourselves. Afraid how our words and actions may be perceived years later and taken out of context, the lack of forgetting may prompt us to speak less freely and openly.

Of course, the other side of that coin is that not only will comments be preserved, but so will their context! The exact opposite trend is already in motion: rather than trusting an edited, summarized version of a conversation (say, as presented by news media), savvy web surfers go to the original transcript and see the context for themselves.

Just as Photoshop makes average people into image retouchers, YouTube makes us video distributors, and Amazon makes us book reviewers, Google and other search tools make us journalists. We’re learning to evaluate and corroborate claims, to seek out primary sources and cite them.

If every stupid comment ever made is stored, then it should no longer be a scandal to find a stupid comment someone made. As usual, human values trail behind technology. But if you want a preview, ask an 18-year-old how they evaluate a peer’s old web posts. Those in glass houses forgive smudges.

Categories
tech

Mobile Linux Gears Up

As I recently posted, handheld computing is set to take a big step forward, and with the hardware finally becoming suitable, there is a big question: proprietary or open software? Ubuntu is gearing up to make that a real choice [BBC].

As this post on the Ubuntu listserv explains,

it is clear that new types of device – small, handheld, graphical tablets which are Internet-enabled are going to change the way we communicate and collaborate. These devices place new demands on open source software and require innovative graphical interfaces, improved power management and better responsiveness.

Intel, specifically, have announced a new low-power processor and chipset architecture which will be designed to allow full internet use on these mobile Internet devices.

Instead of limited-function services like web browsing over my cell phone — which is so expensive and clumsy that I never use it — we will have general-purpose and freely expandable computing in our hands. This is going to be big.

Categories
tech

Search Innovations

Read/Write Web posts:

There are an abundance of new search engines (100+ at last count) – each pioneering some innovation in search technology. Here is a list of the top 17 innovations that, in our opinion, will prove disruptive in the future. These innovations are classified into four types: Query Pre-processing; Information Sources; Algorithm Improvement; Results Visualization and Post-processing.

Categories
tech

House Bill to Protect Bloggers as Journalists

Ars Technica highlights a new amendment to the Free Flow of Information Act of 2007 extending source-protection rights to bloggers. Rep. Rick Boucher (D-VA) has a good reputation among the freedom of information/open access crowd for siding with users. He also sponsored the Fair Use Act of 2007, which would protect libraries and other users of copyrighted materials.

And speaking of open access, a couple quick searches at OpenCongress show that both bills are still in committee. Lobbying time…

Categories
tech

Harvard Law Prof: "Protect Harvard from the RIAA"

Harvard Law School Professor Charles Nesson writes:

The RIAA has already requested that universities serve as conduits for more than 1,200 “pre-litigation letters.” Seeking to outsource its enforcement costs, the RIAA asks universities to point fingers at their students, to filter their Internet access, and to pass along notices of claimed copyright infringement.

But these responses distort the University’s educational mission. They impose financial and non-monetary costs, including compromised student privacy, limited access to genuine educational resources, and restricted opportunities for new creative expression.

With colleges and universities under increasing pressure from the record labels’ lobby, now is the time to push back. The educational mission is a more vital interest to our schools than collaboration with the entertainment industry to prop up their obsolete revenue model.

[via Slashdot]