Cisco has filed a patent application on a method that seeds search engine crawlers using intercepted network traffic. Cisco's method includes monitoring data packets exchanged in a computer network over which documents having respective location identifiers are distributed, so as to detect a request to access a given document.
A location identifier of the given document is extracted from the request. The location identifier is provided to a search engine that searches for data in a set of the documents, so as to cause the search engine to add the given document to the set.
I'm wondering whether Cisco has cleverly found a way for its gear to become a search engine toll collector. For example, Cisco's patent application specifically states:
"Although the embodiments described herein mainly address seeding of web-crawling search engines, the principles of the present invention can also be used for additional applications, such as for controlling the re-crawl frequency for a given Web page.
"It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove.
"Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art."
A block diagram that schematically illustrates a system for searching for data in a computer network:
FIG. 1 is a block diagram that schematically illustrates a system 20 for searching in a computer network 24, in accordance with an embodiment of the present invention. Network 24 may comprise, for example, a Wide-Area Network (WAN) such as the Internet, a Metropolitan-Area Network (MAN), a Local-Area Network (LAN) or a combination of such network types. Network 24 may comprise a public network or an enterprise network (sometimes referred to as an Intranet). Additionally or alternatively, network 24 may comprise any other suitable network type. The network typically comprises a packet-switched network, such as an Internet Protocol (IP) network.
Network 24 comprises servers 26, which store data in Web pages 28. Each page is assigned a unique location identifier, such as a Uniform Resource Locator (URL). In some embodiments, the servers host Web pages that are produced a priori. In alternative embodiments, the servers generate Web pages on-demand based on user input.
The methods and systems described herein can be used in any suitable network over which documents are distributed, regardless of whether the documents are stored a priori or generated on-demand.
Although the exemplary embodiment of FIG. 1 refers to servers, the methods and systems described herein can be used with any other sort of storage or computing devices known in the art.
Moreover, although the embodiments described herein refer to web pages, the disclosed methods and systems can be used with any other suitable type of document.
In the context of the present patent application and in the claims, the term "document" refers to any kind of data resource having a location identifier, such as, for example, a file, a Web page, a database record, a web service or another generic computing service.
Network 24 comprises network elements, such as routers 32, which perform routing or forwarding of data packets in the network. Although the description that follows refers to network routers, the methods and systems described herein can be used with various other kinds of network elements that process data packets, such as switches or gateways.
System 20 comprises one or more search engines 36, which search for data in network 24 in response to user queries. Search engines 36 use web-crawling techniques, as are known in the art. For example, search engine 36 may comprise a Google search engine, which is provided by Google, or the open-source Nutch search engine provided by the Apache Software Foundation. Search engines 36 may comprise different instances of a certain search engine (e.g., multiple Google Appliance boxes) and/or search engines of different types.
Each search engine 36 maintains a web-graph or equivalent data structure, which represents a set of pages that are currently known to the search engine and the links between them. The search engine searches for data in the set of pages, typically by (1) producing an index that maps words to the pages in which they appear, and (2) querying the index in response to user queries.
The search engine creates the web-graph in a progressive manner. The search engine is initially provided with a set of pages, e.g., a set of popular web pages, which are referred to as a seed.
The search engine "crawls" the web by following links that appear in the seed pages and adding the linked pages to the web-graph. When a page is added to the web-graph, the search engine updates the index with the words that are found in this page.
The crawling process continues in a progressive manner by following the links in the newly-added pages, so that the web-graph is expanded constantly. Since page content may change over time, the search engine typically performs re-crawling, i.e., revisits pages that already exist in the web-graph, in accordance with a certain re-crawling policy.
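The seed-then-expand process described above can be sketched in a few lines. This is a toy illustration, not the patent's (or any real engine's) implementation: the `fetch` callback, class name, and data structures are all invented for the example.

```python
from collections import defaultdict

class CrawlerSketch:
    """Toy web-graph crawler: seed pages, follow links, build an inverted index."""

    def __init__(self, fetch):
        self.fetch = fetch                 # fetch(url) -> (text, [linked_urls])
        self.web_graph = {}                # url -> set of outgoing links
        self.index = defaultdict(set)      # word -> set of urls containing it

    def add_page(self, url):
        """Fetch a page, record its links, and index its words."""
        text, links = self.fetch(url)
        self.web_graph[url] = set(links)
        for word in text.lower().split():
            self.index[word].add(url)

    def crawl(self, seed_urls):
        """Expand the web-graph outward from the seed by following links."""
        frontier = list(seed_urls)
        while frontier:
            url = frontier.pop()
            if url in self.web_graph:
                continue                   # already crawled
            self.add_page(url)
            frontier.extend(self.web_graph[url])

    def query(self, word):
        """Return the set of known pages containing the word."""
        return self.index.get(word.lower(), set())
```

Note that a page not linked (directly or transitively) from the seed is never reached, which is exactly the gap the patent's router-reporting scheme targets.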
As can be appreciated, search engine 36 can index and search only pages that belong to its web-graph. Pages that do not exist in the web-graph will not be indexed and the data in these pages cannot be retrieved.
Embodiments of the present invention provide improved methods and systems for adding pages to the web-graphs of search engines 36.
As will be described in detail further below, routers 32 (or other network elements in network 24) monitor data packets exchanged in the network, in order to detect requests from users to access Web pages 28. When a router detects a request to access a certain page, it extracts an identifier of the requested page from the request, and forwards the identifier to the search engines. The search engines may choose to add the reported pages to their web-graphs.
Thus, pages that are not linked to the seed pages, but are requested by users, can be reached, indexed and searched by the search engines.
In some embodiments, the routers send the identifiers to the search engines using a logical bus 38. Bus 38 comprises a communication protocol that is supported by the network elements and the search engines. In some embodiments, the logical bus may be implemented using known mechanisms and protocols, such as using multicast packet transmission.
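The patent leaves the bus protocol open beyond mentioning multicast. As one hedged possibility, a router could publish each extracted URL as a small JSON datagram to a multicast group; the group address, port, and message format below are invented for illustration.

```python
import json
import socket

# Hypothetical multicast group standing in for "logical bus 38"; a real
# deployment would use whatever protocol the routers and engines agree on.
BUS_GROUP, BUS_PORT = "239.1.2.3", 5007

def build_report(url):
    """Encode a URL report as a small JSON datagram payload."""
    return json.dumps({"url": url}).encode()

def report_url(url, ttl=1):
    """Publish an extracted URL on the logical bus via UDP multicast."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # Keep the datagram local unless a larger TTL is requested.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
        sock.sendto(build_report(url), (BUS_GROUP, BUS_PORT))
```

Multicast fits the patent's topology well: one router transmission reaches every subscribed search engine without the router tracking who is listening.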
A block diagram that schematically illustrates a network router:
FIG. 2 is a block diagram that schematically illustrates router 32, in accordance with an embodiment of the present invention. Router 32 in the present example comprises a network interface 40 for communicating with network 24, and a processor 44 that carries out the methods described herein.
Processor 44 may be implemented using hardware components, using software, or using a combination of hardware and software elements. In some embodiments, the functions of detecting requests, extracting identifiers and sending them to the search engines are carried out by the same processor or group of processors that perform conventional routing functions of router 32.
Alternatively, request detection, identifier extraction and sending can be implemented using a separate, dedicated processor. Typically, the processor comprises a general-purpose processor, which is programmed in software to carry out the functions described herein.
The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on tangible media, such as magnetic, optical, or electronic memory.
A flow chart that schematically illustrates a method for seeding a web-crawling search engine:
FIG. 3 is a flow chart that schematically illustrates a method for seeding search engine 36, in accordance with an embodiment of the present invention. The example of FIG. 3 refers to a search engine that searches Web pages on the Internet, and to routers or multilayer switches that identify Hyper-Text Transfer Protocol (HTTP) requests that indicate Uniform Resource Locators (URLs) of requested Web pages. In alternative embodiments, the method of FIG. 3 can be used with search engines that search other types of networks and/or other types of documents. The detected requests may comprise any other suitable type of request. The extracted identifier may comprise not only a URL, but any other suitable type of identifier, such as a Uniform Resource Identifier (URI).
Various techniques for detecting requests and for extracting URLs from requests are known in the art, and any suitable method can be used. Such techniques are used, for example, in Network Intrusion Detection Systems (NIDS). Some of these processes can be implemented at wire-speed, even for high-speed networks such as 10-Gigabit Ethernet networks, using suitable Application-Specific Integrated Circuits (ASICs) or Field-Programmable Gate Arrays (FPGAs). One exemplary process that can be used for detecting requests is commonly known as Deep Packet Inspection (DPI). A typical DPI process examines the data and/or header of a packet as it passes a certain inspection point. A DPI process can search for predefined criteria, such as for a HTTP request, and pass the corresponding packet to another process for extraction of the request URL.
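A production DPI engine would run in an ASIC or FPGA, but the detect-and-extract step it performs can be sketched in software. The sketch below recognizes an HTTP/1.x request line plus `Host` header in a packet payload and reconstructs the requested URL; the function names are invented, and real payloads (TCP segmentation, HTTPS) would need far more handling.

```python
import re

# Minimal DPI-style matchers: an HTTP request line at the start of the
# payload, and a Host header somewhere after it.
REQUEST_LINE = re.compile(rb"^(GET|POST|HEAD) (\S+) HTTP/1\.[01]\r\n")
HOST_HEADER = re.compile(rb"\r\nHost: *([^\r\n]+)", re.IGNORECASE)

def extract_url(payload: bytes):
    """Return the requested URL as bytes, or None if not an HTTP request."""
    req = REQUEST_LINE.match(payload)
    if req is None:
        return None               # not an HTTP request: pass the packet along
    host = HOST_HEADER.search(payload)
    if host is None:
        return None               # malformed request: no Host header
    return b"http://" + host.group(1).strip() + req.group(2)
```

This mirrors the two-stage split the patent describes: a cheap predefined-criteria match (the request line) gates the more detailed extraction step.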
Various methods and systems for implementing Deep Packet Inspection points within IP network nodes are known in the art. In some implementations, DPI functionality can be integrated into a network node. For example, Cisco Systems, Inc. (San Jose, Calif.) offers a series of network switches called Catalyst 6500. DPI functionality can be integrated into such switches using a component called Cisco Catalyst 6500 Supervisor Engine 32 Programmable Intelligent Services Accelerator (PISA).
In alternative implementations, DPI functionality can be carried out by a standalone component, e.g., by a device that is introduced into the traffic path between two network nodes or by mirroring the inbound or outbound traffic of a network node to such a device.
A standalone device that implements DPI may comprise, for example, an SCE 2000 Series Service Control Engine, offered by Cisco Systems, Inc. Thus, the methods described herein can be carried out by one or more network elements, which may or may not be physically collocated. The processors of these network elements are collectively regarded herein as a processor that carries out the disclosed methods.
The method of FIG. 3 begins with router 32 detecting an HTTP request, at a request detection step 50. The detected HTTP request is typically sent from a user of network 24, requesting to access a certain Web page 28 that is stored in the network. The HTTP request comprises a URL of the requested page. The router extracts the URL from the request, at an identifier extraction step 54.
In some embodiments, router 32 may apply filtering to the extracted URLs, at a filtering step 58. In other words, the router may evaluate a certain condition with respect to the extracted URL, and send the URL to the search engine only when the condition is met.
The condition may depend on the time that elapsed between the detection of the request and the detection of a previous request to access the same page (i.e., a previous request carrying the same URL).
For example, the router may send a given URL to the search engine only if the page was not previously requested within a predefined time interval. This technique avoids sending duplicate reports of the same URL, and may assist in reducing the amount of traffic between the routers and search engine. In alternative embodiments, all extracted URLs are sent to the search engine without filtering.
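The time-interval condition above amounts to a per-URL rate limiter. A minimal sketch, with the window length and class name chosen arbitrarily for illustration:

```python
import time

class UrlFilter:
    """Suppress repeated reports of the same URL within a time window."""

    def __init__(self, window_seconds=300.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock            # injectable clock, handy for testing
        self.last_sent = {}           # url -> timestamp of last forwarded report

    def should_report(self, url):
        """True if this URL should be forwarded to the search engines."""
        now = self.clock()
        last = self.last_sent.get(url)
        if last is not None and now - last < self.window:
            return False              # duplicate within the window: drop it
        self.last_sent[url] = now
        return True
```

In a real router this table would need bounding (e.g., expiring old entries), since a popular link could otherwise grow it without limit.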
Additionally or alternatively to filtering multiple requests of the same URL, the router may count the number of occurrences and report this number to the search engine. For example, the router may accumulate requests that carry a given URL over a certain period of time, and send a cumulative report to the search engine.
The cumulative report indicates the URL in question, and the number of detected requests that carry this URL. As noted above, the router sends the URL to the search engine using logical bus 38, at a URL reporting step 62.
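The cumulative-report variant replaces per-request messages with one periodic count per URL. A sketch, assuming a caller-supplied `send` callback delivers reports on the bus (both names are invented here):

```python
from collections import Counter

class CumulativeReporter:
    """Accumulate request counts per URL and emit one report per period."""

    def __init__(self, send):
        self.send = send          # send(url, count): deliver a report on the bus
        self.counts = Counter()

    def observe(self, url):
        """Record one detected request for this URL."""
        self.counts[url] += 1

    def flush(self):
        """Emit one (url, count) report per URL seen this period, then reset."""
        for url, count in self.counts.items():
            self.send(url, count)
        self.counts.clear()
```

The count is what makes the later re-crawl heuristics possible: the engine learns not just that a page exists, but how hot it is.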
Search engine 36 may update its web-graph (i.e., the set of searched pages) in response to the URL sent by router 32, at a web-graph updating step 66. In some embodiments, the search engine adds the page indicated by the URL to the web-graph, assuming the page does not already exist in the web-graph. From this stage, the crawling process will follow links that appear in the newly-added page.
Thus, the newly-added page forms an additional seed page of the web-graph. The crawling process will eventually add the pages linked to the newly-added page to the web-graph, so that these pages become reachable by the search engine. Such pages may have been impossible to reach before the URL was reported, for example if the newly-added page was not linked to the pages of the web-graph in any way.
In some embodiments, the search engine decides if and when to revisit a page that already exists in the web-graph based on the reported URLs. For example, if the search engine identifies that a certain page is reported frequently, the search engine may conclude that the content of this page may have changed.
The search engine may decide to revisit ("re-crawl") this page, and update the index to reflect the new content. Generally speaking, the search engine may decide to search pages that already exist in its web-graph in response to the reported URLs, irrespective of whether these pages have already been searched before. Additionally or alternatively, the search engine may apply any other suitable re-crawling policy in response to the reported URLs.
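The engine-side policy the patent sketches has two branches: add unknown pages as new seeds, and re-crawl known pages that are reported often. A hedged illustration; the threshold value, class name, and reset-on-recrawl behavior are all invented, not taken from the patent:

```python
class EngineReportHandler:
    """Engine-side policy sketch: add unknown pages, re-crawl hot ones."""

    def __init__(self, web_graph, recrawl_threshold=10):
        self.web_graph = web_graph          # set of URLs already known
        self.threshold = recrawl_threshold
        self.report_counts = {}             # url -> reports since last (re)crawl

    def handle_report(self, url, count=1):
        """Return the action taken for a reported URL: 'add', 'recrawl', or 'ignore'."""
        if url not in self.web_graph:
            self.web_graph.add(url)          # new seed page for the crawler
            return "add"
        n = self.report_counts.get(url, 0) + count
        if n >= self.threshold:
            self.report_counts[url] = 0      # frequent reports: content may have changed
            return "recrawl"
        self.report_counts[url] = n
        return "ignore"
```

Keeping this logic in the engine, not the router, matches the patent's division of labor: routers report blindly, and each engine applies its own policy to the same stream of reports.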
Generally speaking, the specific actions taken by the search engine are determined independently of the routers. In particular, each search engine may decide whether to add or revisit a page upon receiving a URL from the routers. Typically, the routers have no information as to whether or not a given page exists in the web-graph of a certain search engine.
Note that a given search engine may update its web-graph with respect to a given page (e.g., add the page to the web-graph or decide to re-crawl the page) in response to reports sent from the same router or from different routers. Different search engines may exercise different policies and may produce different web-graphs based on the same URL reports from the routers.
Although the embodiments described herein mainly address seeding of web-crawling search engines, the principles of the present invention can also be used for additional applications, such as for controlling the re-crawl frequency for a given Web page.
It will thus be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
What's your take? Has Cisco cleverly found a way to become a search engine toll collector?