search engine cache isn't copyright infringement
Some argue that search engines are copyright violators because they crawl, index, and keep an archive of web sites. That copied archive -- or cache -- is, according to this argument, an unauthorized copy. Found via TechDirt: the U.S. District Court for the Eastern District of Pennsylvania held that a Web site operator's failure to deploy a robots.txt file containing instructions not to copy and cache Web site content gave rise to an implied license to index that site.
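For background, robots.txt is a plain-text file served at the root of a site (e.g., example.com/robots.txt) that tells compliant crawlers which paths they may not fetch. A minimal file that opts an entire site out of crawling -- and thus, under the court's reasoning, withholds the implied license -- looks like this:

    User-agent: *
    Disallow: /

The directives are honored voluntarily by well-behaved crawlers; robots.txt is an opt-out convention, which is precisely why the courts treat its absence as implied permission.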
In Parker v. Yahoo!, Inc., 2008 U.S. Dist. LEXIS 74512 (E.D. Pa. Sept. 26, 2008), the court found that the plaintiff's acknowledgment that he deliberately chose not to deploy a robots.txt file on the site containing his work was conclusive on the issue of implied license. In so ruling, the court followed Field v. Google, Inc., 412 F. Supp. 2d 1106 (D. Nev. 2006), a similar copyright infringement action brought by an author who failed to deploy a robots.txt file and whose works were copied and cached by the Google search engine.
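A subtlety worth noting: blocking crawlers entirely via robots.txt also removes a site from search results. A publisher who wants to be indexed but not cached has a narrower tool in the noarchive robots meta tag, which the major engines of the time, including Google and Yahoo!, honored as an instruction not to serve a cached copy of the page:

    <meta name="robots" content="noarchive">

The Field court noted that the plaintiff knew of, and chose not to use, this kind of industry-standard mechanism.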
The court further ruled, though, that a nonexclusive implied license may be terminated. Parker may have terminated the implied license by instituting this litigation, and he alleged that the search engines failed to remove copies of his works from their caches even after suit was filed. If proved, "the continued use over Parker's objection might constitute direct infringement." That issue will likely be resolved at a later stage of the case.
For an analysis, see the New Media and Technology Law Blog.
The same plaintiff's earlier case, Parker v. Google, Inc., No. 06-3074 (3d Cir. July 10, 2007), was also a search engine copyright infringement action.