<metapackage xmlns:os="http://opensuse.org/Standards/One_Click_Install" xmlns="http://opensuse.org/Standards/One_Click_Install">
  <group>
    <repositories>
      <repository recommended="true">
        <name>home:gcomes.obs:ring0</name>
        <summary></summary>
        <description></description>
        <url>https://download.opensuse.org/repositories/home:/gcomes.obs:/ring0/16.0/</url>
      </repository>
      <repository recommended="true">
        <name>openSUSE:Leap:16.0</name>
        <summary>openSUSE Leap 16.0 based on SLFO</summary>
        <description>Leap 16.0 based on SLES 16.0 (specifically SLFO:1.2)</description>
        <url>https://download.opensuse.org/distribution/leap/16.0/repo/oss/</url>
      </repository>
      <repository recommended="true">
        <name>openSUSE:Backports:SLE-16.0</name>
        <summary>Community packages for SLE-16.0</summary>
        <description>Community packages for SLE-16.0</description>
        <url>https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-16.0/standard/</url>
      </repository>
      <repository recommended="false">
        <name>SUSE:SLFO:1.2</name>
        <summary>SLFO 1.2 (the base for openSUSE 16.0 and SLES 16.0)</summary>
        <description></description>
        <url>https://download.opensuse.org/repositories/SUSE:/SLFO:/1.2/standard/</url>
      </repository>
    </repositories>
    <software>
      <item>
        <name>perl-WWW-RobotRules</name>
        <summary>database of robots.txt-derived permissions</summary>
        <description>This module parses _/robots.txt_ files as specified in &quot;A Standard for
Robot Exclusion&quot;, at &lt;http://www.robotstxt.org/wc/norobots.html&gt;. Webmasters
can use the _/robots.txt_ file to forbid conforming robots access to parts
of their web site.

The parsed files are kept in a WWW::RobotRules object, and this object
provides methods to check if access to a given URL is prohibited. The same
WWW::RobotRules object can be used for one or more parsed _/robots.txt_
files on any number of hosts.

The following methods are provided:

* $rules = WWW::RobotRules-&gt;new($robot_name)

  This is the constructor for WWW::RobotRules objects. The first argument
  given to new() is the name of the robot.

* $rules-&gt;parse($robot_txt_url, $content, $fresh_until)

  The parse() method takes as arguments the URL that was used to retrieve
  the _/robots.txt_ file, and the contents of the file.

* $rules-&gt;allowed($uri)

  Returns TRUE if this robot is allowed to retrieve this URL.

* $rules-&gt;agent([$name])

  Get/set the agent name. NOTE: Changing the agent name will clear the
  cached robots.txt rules and their expiry times.</description>
      </item>
    </software>
  </group>
</metapackage>
