New Handhelds: Open vs. Proprietary

At this year’s Macworld, Apple didn’t announce any new computer products — unless you count the iPhone. In fact, Apple has delayed the release of its latest OS update, “Leopard”, until October in order to devote resources to the iPhone. It’s more than a phone — it’s a general-purpose handheld computer. To my eyes, the phone function is a bonus.

As usual, Apple is ahead of the curve. Yesterday, Intel announced its partnership with several other hardware manufacturers to produce a $500 handheld. It will run Linux as well as Windows Vista, which actually makes it a more open platform than the iPhone, for which Apple has announced no plans to release the API.

Since the market failure of the Apple Newton, we’ve been stuck in a world of low-powered PDAs running non-standard operating systems behind clunky interfaces. With the huge popularity of “smart phones”, there’s a new willingness to pursue the handheld format. Display technology has also come a long way. And I, for one, have quite literally been waiting since the Newton.

A few years ago, Duke U. gave out iPods to its entire freshman class. For this year’s freshmen, that’s probably redundant since iPods are more popular than beer. All kidding aside, as popular as laptops are, they don’t get carried everywhere. Cellphones do, but they are locked down and designed so as not to function as general-purpose computers. Handhelds offer extreme integration of computers into daily experience.

College campuses will be the laboratories of this new technology’s cultural impact. We’re not committed to productivity per se; college students will find the fun uses — as well as the innovative workflows that those “on the clock” wouldn’t think to try. One lesson of Web 2.0 is that you don’t design a social environment — you give everyone access and if the product is cool, some of the thousands or millions of users will contribute to it, leverage it, improve it, and turn it into something great.

Of course, you don’t want to compromise the original functionality by allowing remixes. The question of how open to be is very much a live one right now. See MySpace vs. embedded media widgets, or Alexa vs. Alexaholic. As Wired’s Eliot Van Buskirk says in the MySpace article,

Its closest competitor, Facebook, has unannounced (but confirmed) plans to open its site to third-party widgets for the first time. Ultimately, the two sites could come to resemble each other, but which will users prefer? Surely, the one that’s more open and transparent. That approach has prevailed over and over on the web.

Will the public continue to vote with their clicks for the open web model? Probably. Will software and hardware makers draw that analogy to their products? Eventually. I believe that, barring anti-competitive manipulation (e.g. misuse of copyright and patent law), the open model will prevail. But man, the iPhone looks cool…

RIAA vs. Colleges Update

As I’ve mentioned before, the RIAA has recently been targeting college and university students for lawsuits. In the past, most people who received notices settled out of court, but the winds may be changing. Ars Technica has been covering the story, including the latest development: North Carolina State University has refused to provide students’ names to the RIAA. Now the RIAA must subpoena the ISP, i.e. NCSU, for the names.

It’s too early to say what will happen here, but certainly the RIAA is no longer getting the cooperation, nor the knee-jerk settlements, it’s used to. Stay tuned.

Yale Access to Knowledge Conference

Boing Boing mentions Yale’s upcoming Access to Knowledge conference (April 27-29) addressing policy issues raised by new IT developments. Remote participation via the A2K Wiki is encouraged.

We now have the ability to easily share knowledge with everyone in the world. I have talked a bit about the problematic transition from a closed information ecosystem to an open model — the most pressing issue in college IT. We’re looking for ways to preserve the expertise that academia has accumulated and which, to a large extent, has been encoded in professional culture.

Ironically, the principle of free access to knowledge, and the practices to support it, have only developed in closed-access institutions. The project now is to decode those practices into explicit policies, and put our money where our mouth is. Naturally, these new policies run against the grain of some of the protectionist policies of the closed model.

Especially with respect to the law, e.g. “intellectual property”, where educational institutions have special status, we need to make sure that leveling the playing field means increasing protections for the public rather than decreasing protections for educational institutions. A similar reevaluation is going on in many different areas: do bloggers deserve the same First Amendment protections as professional/institutional journalists? (See EFF’s Bloggers’ Rights resources.) Do publishers have the right to control all copying of their work? (See Lawrence Lessig’s Free Culture.)

In each case, a deal was struck at some point in the past that gave rights to a limited group of people. Now that the tools are available to all, we have to revisit that deal and see whether the limitations on the group were a key factor in striking the balance or simply a historical accident. We probably do need to expand our concept of a free press to include bloggers. As with other First Amendment rights, the more speech the better. Copyright, on the other hand, probably should not be extended to cover the majority of uses of creative works. Historically, non-commercial use was generally unregulated; the absolute power of publishers over their work was limited by copyright’s narrow scope.

New technology has shifted the balance in a wide range of areas, and now we need to renegotiate the policy deals. The A2K Wiki provides a good overview of these areas and some policy directions.

WIPO Broadcast Treaty — Your Chance to Speak

Boing Boing points out the upcoming public hearing on May 9 about the WIPO Broadcasting Treaty.

The Broadcast Treaty is a proposal to let broadcasters (and “webcasters” — people who host files and make them available to the Internet) claim a copyright to the stuff that they transmit. Broadcasters get this special right even if the stuff they’re sending around is in the public domain, or Creative Commons licensed, or not copyrightable (like CSPAN’s broadcasts of Congress). Fair use doesn’t apply to this right.

Seems ridiculous, right? Weigh in!

Student PC Privacy & US v. Heckenkamp

The 9th Circuit Court of Appeals has ruled on a case involving a university network admin’s search of a student’s personal computer. Inside Higher Ed covers the story.

University of Wisconsin student Jerome Heckenkamp pled guilty to federal charges arising from unauthorized access to Qualcomm’s private network. The Court found that the FBI did not need a warrant to search the student’s computer in this case. On the other hand, it held that in general students have a reasonable expectation of privacy on their personal computers, an expectation that was outweighed in this case.

Higher ed media are calling this a win for privacy [Chronicle of Higher Ed]. Inside Higher Ed summarizes:

It was legitimate for the university to act as it did, the judges found, because it was acting out of concern about its own e-mail network, not to help with the law enforcement investigation set off by Qualcomm, and it acted in ways that were consistent with the university’s policies that Heckenkamp had agreed to follow.

IANAL, but I have to disagree with the conclusion that the Court was legitimizing the university’s conduct. Instead, the question was whether the FBI’s use of the network admin’s findings was legitimate. In the absence of a warrant, the FBI asserted — and was granted — a “special needs exception” to the warrant requirement.

The Court found that “requiring a warrant to investigate potential misuse of the university’s computer network would disrupt the operation of the university and the network that it relies upon in order to function.” [p. 11 of the decision] I’m not sure I buy that either, but the point is that, within this case, the university would be off the hook for the violation of privacy even if the FBI couldn’t use the results of that violation to prosecute the student.

And it only stands to reason that a university network admin, who is not a law enforcement officer, should not be held to the same 4th-Amendment standards as the FBI. So I would hesitate to draw campus network policy conclusions from this decision, aside from the observation that it’s probably safe to cooperate with the FBI.

Blackboard’s Dangerous Patent

EFF lawyer Jason Schultz says Learning Management System vendor Blackboard’s broad patent for e-learning gives it too much power over educational institutions. Ars Technica covers the story.

The patent, which covers an “Internet-based education support system and methods,” could potentially threaten increasingly popular open-source course management platforms like Moodle and subject universities to the risk of litigation. … Although Blackboard has publicly pledged not to enforce its patent against open-source software distributors, universities, or non-commercial entities, there are many gray areas that make it difficult to guess what is permissible and what is not. For instance, Schultz points out that the pledge allows Blackboard to sue proprietary software vendors that incorporate open-source software components into their offerings.

This situation parallels the Novell–Microsoft patent non-enforcement agreement I previously discussed. Even in the best case, when the patent holder pledges good behavior, users still live under the coercive effects of a pledge that is contingent and could be narrowed or revoked. Colleges and universities are uniquely dedicated to freedom of information, and we need tools that fit with that dedication.

The whole point of the patent system is to bring proprietary inventions and novel methods into the public domain by legitimizing the inventor’s right to license the technology. Software patents allow the inventor to set license terms that are incompatible with free software. Charging fees was once no impediment to patent licensing because the only licensees would be commercial entities. With software, there is no inherent need to recoup costs commercially — whence free software — so charging fees now reduces public access to patented technology. Patent holders are also allowed to restrict the modification of their licensed technology, which is incompatible with free software.

I am not claiming that colleges and universities must use only free software. But we do need the freedom to choose between free and commercial models as they affect our sometimes conflicting educational objectives — e.g. freedom of information vs. access to robust tools. It’s hard enough finding an LMS that does what we need at an affordable cost (whether commercial software, paid support for free software, or employees to roll our own) without the additional anticompetitive force of potential patent litigation.

Social Search

Found a neat summary of social search by Arnaud Fischer at searchengineland.com. Web technology has gone through a few distinct phases. First (early-mid 1990s) was just digitizing and hyperlinking information, making its interconnectedness literal. Second, Google (1998) revolutionized search; you no longer need to know where information is in order to get it. But, as I’ve previously posted, there are benefits to cataloging information rather than just sifting through an undifferentiated mess. It seems that any algorithm less sophisticated than an intelligent agent is not only less effective at finding good results but also susceptible to manipulation (the sketch after the quote below makes this concrete).

Throughout the past decade, a search engine’s most critical success factors – relevance, comprehensiveness, performance, freshness, and ease of use – have remained fairly stable. Relevance is more subjective than ever and must take into consideration the holistic search experience one user at a time. Inferring each user’s intent from a mere 2.1 search terms remains at the core of the relevance challenge.

Social search addresses relevance head-on. After “on-the-page” and “off-the-page” criteria, web connectivity and link authority, relevance is now increasingly augmented by implicit and explicit user behaviors, social networks and communities.
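
Link authority is a good example of how a too-simple algorithm gets gamed. Here is a minimal PageRank-style sketch in Python; the toy graph, damping factor, and iteration count are my own made-up illustrations, not anything Google actually uses:

```python
# Minimal PageRank-style power iteration over a toy link graph.
# Graph, damping factor, and iteration count are illustrative only.

links = {
    "home": ["docs", "blog"],
    "docs": ["home"],
    "blog": ["home", "docs"],
    # A "link farm": three pages that exist only to vote for each other.
    "spam1": ["spam2", "spam3"],
    "spam2": ["spam1", "spam3"],
    "spam3": ["spam1", "spam2"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # split rank among outlinks
            for target in outlinks:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

Run it and the three farm pages, which nobody outside the farm links to, score on par with or above the genuinely cited pages. That is exactly the manipulability that social signals are supposed to correct for.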

Attempts to literally harness human judgement to do the work of a cataloging engine (see the Mechanical Turk) don’t scale to internet proportions. What we need is a way to collect social information without imposing a burden on users. Some sites have succeeded in providing a platform where users freely contribute linked content, e.g. Wikipedia, and some have further gotten their users to add cataloging information — e.g. YouTube’s tags. Visualizations like tag clouds make these implementations easier to use, but no deeper. And they still require intentional effort, and therefore goodwill and trust — two of the least scalable human resources.
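
Tag clouds really are as shallow as I’m suggesting: frequency counts mapped onto font sizes. A minimal sketch, with made-up tag data and size constants:

```python
# Minimal tag cloud weighting: count tag frequencies and map them
# linearly onto a font-size range. Tags and size range are made up.
from collections import Counter

tags = ["music", "video", "music", "news", "music", "video", "tech"]
counts = Counter(tags)

MIN_PT, MAX_PT = 10, 32  # smallest and largest font sizes in points
lo, hi = min(counts.values()), max(counts.values())

for tag, n in counts.most_common():
    # Linear interpolation, guarding against all counts being equal.
    size = MIN_PT if hi == lo else MIN_PT + (n - lo) * (MAX_PT - MIN_PT) / (hi - lo)
    print(f"{tag}: used {n}x -> {size:.0f}pt")
```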

I fundamentally agree with Fischer’s conclusion that using self-organizing groups to generate consensus is a much better way to measure relevance. The big question is how to balance the needs of social data collection with the freedom of association that it depends on. The public, machine-readable nature of most web forums amplifies any chilling effect into a snowstorm of suppression. Further, when a web service becomes popular, there is a strong temptation to monetize that popularity with ads, sponsored content, selling user information to marketers, etc. That background probably skews the opinions expressed in that forum, and by extension, the extractable metadata.

But even more fundamentally, there is something different about web discourse that makes participants espouse opinions they normally wouldn’t. Try researching software on a users’ forum. Many seemingly sincere opinions are based not on experience but on a sort of meta-opinion as to the consensus answer. In fact I bet most posters really are sincere — I have caught myself doing this. This reification is what actually creates consensus, and having posted on the accepted side of an issue recursively increases a poster’s reputation. There is no check on this tendency because we lack the normal social status indicators online. I would bet that posters regress toward the mean opinion much faster than in offline discourse. Any social scientists out there want to test this out?
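
While we wait for the social scientists, a toy model of the effect is easy to write. Everything here (the conformity weight, the opinion distribution) is my own arbitrary assumption, not empirical data:

```python
# Toy model: each poster publishes a weighted average of a private
# opinion and the mean of what has already been posted. The weight
# and distributions are arbitrary assumptions, not empirical values.
import random

random.seed(1)
CONFORMITY = 0.6  # how much a poster defers to the visible consensus

private = [random.gauss(0, 1) for _ in range(200)]  # true opinions
posted = [private[0]]  # the first poster has no consensus to defer to

for opinion in private[1:]:
    consensus = sum(posted) / len(posted)
    posted.append((1 - CONFORMITY) * opinion + CONFORMITY * consensus)

def spread(xs):
    mean = sum(xs) / len(xs)
    return (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"std dev of private opinions: {spread(private):.2f}")
print(f"std dev of posted opinions:  {spread(posted):.2f}")
```

With these numbers the posted opinions cluster much more tightly than the private ones, which is the regression toward the mean I’m speculating about.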

Speaking of which, there are already many empirically supported social biases which affect our judgements and opinions. Are we ready for Search by Stereotype? The tyranny of the majority and the tragedy of the commons are as dangerous to the freedom of (digital) association as government suppression or market influences.

The web as it currently stands, the non-semantic web, is largely non-judgemental in its handling of information. Divergent opinions have equal opportunity to be tagged, linked, and accepted. This non-function is actually a feature. Before we trust online consensus to generate relevance measurements for our social search engines, we need to understand and try to compensate for its biases.

EMI Records Goes DRM-Free

EMI Records will begin selling songs on iTunes without DRM [Ars Technica]. Other content on the iTunes Music Store uses formats such as .m4p that make copying difficult (not impossible). EMI tracks will now be available in freely copyable .m4a AAC format.

Of course they’re still charging for downloads; in fact these new tracks cost more — $1.29 vs. 99¢. Apple and EMI are emphasizing that the tracks are encoded at a higher bitrate (256 kbps vs. 128) as an explanation for the price increase. In my experience, that difference in quality is quite noticeable and may well be worth it for some listeners. Even at 256 kbps I can still hear compression artifacts, so I buy physical CDs and rip them. I’ll stick with Apple’s Lossless encoder, which sounds great and also has no DRM.
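
For scale, here is the back-of-the-envelope file-size math; the 4-minute track length is a made-up example:

```python
# Back-of-the-envelope file size: bitrate (bits/s) * duration (s) / 8.
# The 4-minute track length is a made-up example.
duration_s = 4 * 60

for kbps in (128, 256):
    size_mb = kbps * 1000 * duration_s / 8 / 1_000_000
    print(f"{kbps} kbps: about {size_mb:.1f} MB")
```

So a typical track roughly doubles from about 3.8 MB to about 7.7 MB, which matters more for your iPod’s capacity than for your download time.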

It’s really nice to see Steve Jobs put his .m4as where his mouth is. We’ll see if the doom-sayers are right and people stop paying for downloads now that they are freely copyable. My guess is no: people are willing to pay a little for the convenience of legitimate downloads, and maybe a little more for the convenience of being able to play them in other devices besides their iPod and their iTunes-authorized computers.

I would also guess that EMI will not stand idly by to find out whether p2p networks are flooded with these songs — many labels hire outside firms to obfuscate p2p search results with decoy files [Ars].

Students Sue Turnitin Anti-cheating Service

Emily D. alerted me to this Washington Post story on high school students suing anti-cheating service Turnitin. Turnitin compares submitted student papers with a repository of other submitted papers to check for plagiarism.
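
Turnitin’s actual matching algorithm is proprietary, but a common technique for this kind of comparison is word n-gram “shingling” scored by Jaccard similarity. Here is a minimal sketch; the shingle size, threshold, and sample texts are my own assumptions, not Turnitin’s parameters:

```python
# Minimal overlap check: compare the sets of word trigrams ("shingles")
# from two documents. Shingle size, threshold, and sample texts are
# arbitrary assumptions, not Turnitin's actual parameters.

def shingles(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

paper = "Holden Caulfield resists the adult world he calls phony"
repository = [
    "Holden Caulfield resists the adult world and calls it phony",
    "The novel follows a teenager wandering around New York City",
]

for stored in repository:
    score = jaccard(shingles(paper), shingles(stored))
    verdict = "FLAG" if score > 0.25 else "ok"
    print(f"{score:.2f} {verdict}: {stored}")
```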

I’m inclined to agree with the legal expert quoted in the article, that while using student work for scholarly purposes is probably fair use, “it seems like Turnitin is a commercial use. They turn around and sell this service, and it’s expensive. And the service only works because they get these papers.”

Preventing other people from running a business based on your work is one of the main benefits of copyright, and one of the rights most often asserted even when other copyright privileges are waived, as in a Creative Commons Non-Commercial license.

Turnitin may qualify for the DMCA’s safe harbor provision for service providers, since users submit the work. That means the burden of proving infringing use falls on the author (i.e. the student). So even if the two students in this suit win, Turnitin may not have to shut down, at least in the short run.

It strikes me that the real place to push is the teachers, who are requiring their students to prove they’re not cheating. That runs against the grain of academic ethics ideology, which balances strict standards with academic freedom. This arrangement would also likely have a chilling effect on student work. There are only so many ways to write an essay on “The Catcher in the Rye.” Depending on the accuracy of the Turnitin algorithm, it may be impossible for a high school student to write a sufficiently distinct paper without purposefully trying to avoid a false positive.

That means that in addition to understanding the topic and the assignment, students would need to try to reverse-engineer the plagiarism detector. Failing that, they would then have to prove they didn’t cheat, which is nearly impossible. Forcing students to submit their papers to such a service imposes an unknown and possibly undue burden on academic work.

Service Providers Covered by Federal, not State IP Laws

The NY Times reports on the 9th Circuit’s decision Thursday ruling that internet service providers, because their hosted content is available across state lines, are subject to federal law. The service providers sued by the adult site “Perfect 10” were shielded from liability under state laws “such as right of publicity and trademark statutes,” and instead covered by the federal Communications Decency Act and the DMCA.

The EFF has a good explanation of the legal background. “The Ninth Circuit decision clarified a number of factors of the DMCA safe harbor, importantly noting that ‘[t]he DMCA notification procedures place the burden of policing copyright infringement–identifying the potentially infringing material and adequately documenting infringement–squarely on the owners of the copyright.'” This is probably good news for web hosts and fair use on the web. But of course this decision is subject to further appeal.