Lines Matching refs:urllib
4 HOWTO Fetch Internet Resources Using The urllib Package
29 **urllib.request** is a Python module for fetching URLs
36 urllib.request supports fetching URLs for many "URL schemes" (identified by the string
45 not intended to be easy to read. This HOWTO aims to illustrate using *urllib*,
47 the :mod:`urllib.request` docs, but is supplementary to them.
53 The simplest way to use urllib.request is as follows::
55 import urllib.request
56 with urllib.request.urlopen('http://python.org/') as response:
65 import urllib.request
67 with urllib.request.urlopen('http://python.org/') as response:
74 Many uses of urllib will be that simple (note that instead of an 'http:' URL we
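A self-contained version of the snippet above; the network call is kept behind a function so the example runs even without connectivity, and the URL is the one from the fragment:

```python
import urllib.request

def fetch(url):
    # urlopen returns a response object usable as a context manager;
    # read() yields the raw bytes of the resource (decode them yourself).
    with urllib.request.urlopen(url) as response:
        return response.read()

# Requires network access:
# html = fetch('http://python.org/')
```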
80 send responses. urllib.request mirrors this with a ``Request`` object which represents
87 import urllib.request
89 req = urllib.request.Request('http://www.voidspace.org.uk')
90 with urllib.request.urlopen(req) as response:
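A sketch of the ``Request`` interface with a placeholder URL (``www.example.com``); the request is built but never sent, so it runs offline:

```python
import urllib.request

# Building a Request without sending it; urlopen(req) would perform
# the actual fetch.  The URL here is a placeholder.
req = urllib.request.Request('http://www.example.com/')

print(req.full_url)      # 'http://www.example.com/'
print(req.type)          # URL scheme: 'http'
print(req.host)          # 'www.example.com'
print(req.get_method())  # 'GET', since no data was supplied
```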
93 Note that urllib.request makes use of the same Request interface to handle all URL
96 req = urllib.request.Request('ftp://example.com/')
114 argument. The encoding is done using a function from the :mod:`urllib.parse`
117 import urllib.parse
118 import urllib.request
125 data = urllib.parse.urlencode(values)
127 req = urllib.request.Request(url, data)
128 with urllib.request.urlopen(req) as response:
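One detail the matched lines skip past: in Python 3 the ``data`` argument must be *bytes*, so the string that ``urlencode`` produces needs an explicit ``encode()``. A sketch with placeholder URL and form fields:

```python
import urllib.parse
import urllib.request

url = 'http://www.example.com/cgi-bin/register.cgi'  # placeholder
values = {'name': 'Somebody Here', 'language': 'Python'}

data = urllib.parse.urlencode(values)   # a str: 'name=Somebody+Here&language=Python'
data = data.encode('ascii')             # the data argument must be bytes
req = urllib.request.Request(url, data)

print(req.get_method())  # 'POST' -- supplying data switches the method
# with urllib.request.urlopen(req) as response:  # requires network access
#     the_page = response.read()
```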
136 If you do not pass the ``data`` argument, urllib uses a **GET** request. One
148 >>> import urllib.request
149 >>> import urllib.parse
154 >>> url_values = urllib.parse.urlencode(data)
159 >>> data = urllib.request.urlopen(full_url)
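For a **GET** request the encoded values are appended to the URL itself after a ``?``. A sketch with illustrative values (no network call is made):

```python
import urllib.parse

data = {'name': 'Somebody Here', 'location': 'Northampton', 'language': 'Python'}
url_values = urllib.parse.urlencode(data)
print(url_values)  # name=Somebody+Here&location=Northampton&language=Python

# Joining the query string onto a placeholder URL; passing full_url to
# urllib.request.urlopen() would then perform the GET.
full_url = 'http://www.example.com/example.cgi' + '?' + url_values
print(full_url)
```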
171 to different browsers [#]_. By default urllib identifies itself as
172 ``Python-urllib/x.y`` (where ``x`` and ``y`` are the major and minor version
174 e.g. ``Python-urllib/2.5``), which may confuse the site, or just plain
181 import urllib.parse
182 import urllib.request
191 data = urllib.parse.urlencode(values)
193 req = urllib.request.Request(url, data, headers)
194 with urllib.request.urlopen(req) as response:
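A sketch of supplying a custom ``User-Agent``; the URL and agent string are illustrative, and the request is only constructed, not sent:

```python
import urllib.request

url = 'http://www.example.com/'                  # placeholder
user_agent = 'Mozilla/5.0 (X11; Linux x86_64)'   # illustrative value
headers = {'User-Agent': user_agent}

req = urllib.request.Request(url, headers=headers)

# Request normalizes header names with str.capitalize(), so the
# stored key is 'User-agent'.
print(req.get_header('User-agent'))
```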
211 The exception classes are exported from the :mod:`urllib.error` module.
223 >>> req = urllib.request.Request('http://www.pretend_server.org')
224 >>> try: urllib.request.urlopen(req)
225 ... except urllib.error.URLError as e:
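To show the ``URLError`` interface without needing an unreachable server, one can be constructed directly; the reason string here is just an example of what a DNS failure produces:

```python
import urllib.error

# urlopen raises URLError when it cannot reach a server (DNS failure,
# refused connection, ...).  The 'reason' attribute explains why.
e = urllib.error.URLError('name or service not known')
print(e.reason)

# URLError subclasses OSError, so broad OS-level handlers also catch it.
print(isinstance(e, OSError))  # True
```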
238 a different URL, urllib will handle that for you). For those it can't handle,
329 geturl, and info methods as returned by the ``urllib.response`` module::
331 >>> req = urllib.request.Request('http://www.python.org/fish.html')
333 ... urllib.request.urlopen(req)
334 ... except urllib.error.HTTPError as e:
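``HTTPError`` doubles as a response object: besides being an exception it carries the status code, reason, headers, and a readable body. Built by hand below purely to show that interface (normally ``urlopen`` raises it for you); URL and body are placeholders:

```python
import io
import urllib.error

err = urllib.error.HTTPError(
    'http://www.example.com/fish.html', 404, 'Not Found',
    {}, io.BytesIO(b'page not found'))

print(err.code)      # 404
print(err.reason)    # 'Not Found'
print(err.geturl())  # the URL that failed
print(err.read())    # the body bytes
```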
357 from urllib.request import Request, urlopen
358 from urllib.error import URLError, HTTPError
382 from urllib.request import Request, urlopen
383 from urllib.error import URLError
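Those two imports support the usual combined handling pattern. Since ``HTTPError`` subclasses ``URLError``, it must be caught first; a minimal sketch (the commented call needs network access):

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def fetch(url):
    req = Request(url)
    try:
        with urlopen(req) as response:
            return response.read()
    except HTTPError as e:
        # The server answered, but with an error status.
        print('The server could not fulfill the request; code', e.code)
    except URLError as e:
        # We never reached a server at all.
        print('Failed to reach the server; reason:', e.reason)

# fetch('http://www.example.com/')  # requires network access
```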
403 :mod:`urllib.response`.
423 confusingly named :class:`urllib.request.OpenerDirector`). Normally we have been using
494 password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
501 handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
504 opener = urllib.request.build_opener(handler)
510 # Now all calls to urllib.request.urlopen use our opener.
511 urllib.request.install_opener(opener)
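Putting the basic-auth pieces from the matched lines together; the URL and credentials below are placeholders, and a realm of ``None`` with ``HTTPPasswordMgrWithDefaultRealm`` means "use these credentials for any realm at this URL":

```python
import urllib.request

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
top_level_url = 'http://www.example.com/'           # placeholder
password_mgr.add_password(None, top_level_url, 'alice', 's3cret')

handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)

# opener.open(url) uses the handler for this opener only;
# urllib.request.install_opener(opener) would make it the global default.
print(password_mgr.find_user_password(None, top_level_url))
```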
534 **urllib** will auto-detect your proxy settings and use those. This is done through
541 >>> proxy_support = urllib.request.ProxyHandler({})
542 >>> opener = urllib.request.build_opener(proxy_support)
543 >>> urllib.request.install_opener(opener)
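The same three lines as a runnable sketch; an empty dict disables proxy auto-detection entirely, while a mapping like ``{'http': 'http://proxy.example.com:3128'}`` (a made-up address) would force a specific proxy:

```python
import urllib.request

proxy_support = urllib.request.ProxyHandler({})
opener = urllib.request.build_opener(proxy_support)

# Use opener.open(...) directly, or install it as the global default
# so plain urllib.request.urlopen() calls bypass any system proxy.
urllib.request.install_opener(opener)

print(proxy_support.proxies)  # {}
```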
547 Note that older releases of ``urllib.request`` *did not* support fetching ``https`` locations
548 through a proxy. Where still needed, this can be enabled by extending urllib.request as
554 the documentation on :func:`~urllib.request.getproxies`.
560 The Python support for fetching resources from the web is layered. urllib uses
566 the socket timeout can be set globally for all sockets (:func:`urllib.request.urlopen` also accepts a per-call *timeout* argument).
570 import urllib.request
576 # this call to urllib.request.urlopen now uses the default timeout
578 req = urllib.request.Request('http://www.voidspace.org.uk')
579 response = urllib.request.urlopen(req)
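A sketch of the global-timeout approach with a placeholder URL; the request is built but not sent, so it runs offline:

```python
import socket
import urllib.request

# Set a process-wide default timeout for all new sockets.  Note that
# urllib.request.urlopen() also accepts a per-call timeout= argument.
timeout = 10
socket.setdefaulttimeout(timeout)

req = urllib.request.Request('http://www.example.com/')  # placeholder URL
# urllib.request.urlopen(req) would now give up after 10 seconds.

print(socket.getdefaulttimeout())
```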
600 is set to use the proxy, which urllib picks up on. In order to test
601 scripts with a localhost server, I have to prevent urllib from using
603 .. [#] urllib opener for SSL proxy (CONNECT method): `ASPN Cookbook Recipe