The following urlcp
settings control resources used by
the fetch functions, such as time, size, memory and files:
maxframes
(integer)
Sets the maximum number of subsidiary frames, iframes, and/or
JavaScript pages (<SCRIPT SRC=...></SCRIPT>) to also fetch
for a document. The default is 5. Frames are only fetched if
getframes is on, <IFRAME>s only if getiframes is on, and
JavaScript pages are only fetched if javascript and
getscripts are on.
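As an illustration, a script that wants frame and iframe content
fetched along with the framing page might enable those fetches and
raise the limit; this is a sketch only, and the values and the $url
variable are illustrative:
  <urlcp getframes on>
  <urlcp getiframes on>
  <urlcp maxframes 10>
  <fetch $url>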
maxhdrsize
(integer)
The maximum size of headers allowed; when exceeded, a "Max
headers size exceeded (truncated)" message is generated. The
default is 128K; -1 indicates no limit. It is rarely necessary to
increase this limit, as headers are small and mostly fixed in
size. Added in version 2.6.937800000 19990920; was part of
maxpgsize
in prior versions. In version 4.01.1023500000 20020607
and later the previous setting is returned.
Aka maxhdrsz, maxheadersize, maxheadersz.
maxconnidletime
(double)
Maximum idle time in seconds (unused time between requests) that a
connection may remain open before it is closed and not reused for
Keep-Alive. The default is 5. Added in version 5.00.1094074734 20040901.
Returns previous setting.
maxconnlifetime
(double)
Maximum seconds of real time that a connection can be open and
still be re-used with Keep-Alive for future requests. Idle
connections that are older than this will be closed. Defaults to
600 (i.e. 10 minutes). Added in version 5.00.1096313795 20040927.
Returns previous setting.
maxdomdepth
(integer)
Maximum depth of the DOM during text formatting. Exceeding this
depth will silently limit the element stack and prevent further
nested elements from being properly added to the DOM. This limit
prevents pages that are degenerately nested (or parsed as nested)
from consuming excessive resources. -1 indicates no limit.
The default is 64. Added in version 8.01.1662670396 20220908.
maxdownloadsize
(integer)
Sets the maximum download size (network transfer size) of a
response document body. -1 indicates no limit, and is the
default. Maximum download size is checked before content
and transfer encodings are decoded, i.e. it is a limit on network
transfer size, not final document size (which the maxpgsize
setting controls). A network transfer that exceeds
maxdownloadsize
will generate a "Max download size
exceeded (truncated)" error message, and the transfer will be
aborted. Added in version 5.01.1249039000 20090731. Aka
maxdownloadsz.
Note that since encodings are decoded on-the-fly as the document
is downloaded, not only will exceeding maxdownloadsize
abort the transfer, but typically so will exceeding
maxpgsize, though possibly at an earlier point if a
compression encoding was used. This is why maxdownloadsize
can usually be left at its -1 (unlimited) default, and just
maxpgsize set instead: the latter setting controls final
document size, and indirectly sets an upper limit on network
transfer size. Thus both memory and network usage can be
limited with maxpgsize.
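For example, a script that wants to bound both memory and network
usage per fetch might simply cap the final document size and leave
maxdownloadsize at its unlimited default; a sketch only, with the
10MB value and $url variable purely illustrative:
  <urlcp maxpgsize "10MB">
  <fetch $url>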
maxidleconn
(integer)
Maximum number of idle connections to cache for future Keep-Alive
requests. Added in version 5.1, where the default is 2. Returns
previous setting.
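As an illustration, a script making many requests to a handful of
hosts might cache more idle connections and tune their idle and
lifetime limits; the values below are illustrative, not
recommendations:
  <urlcp maxidleconn 8>
  <urlcp maxconnidletime 15>
  <urlcp maxconnlifetime 300>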
maxkeepaliverequests or maxconnrequests
(integer)
Sets the maximum number of page requests to make on a single
Keep-Alive connection before closing it and opening a new
one. -1 sets no limit. Added in version
4.04.1068090000 20031105, where the default was 0, and Keep-Alive
was only supported if <urlcp netmode sys>
was set. In
version 5.1, the default is 100, and Keep-Alive is supported for
normal <urlcp netmode int>
fetches as well. Setting 1 (or
0) turns off Keep-Alive, i.e. all requests send a
Connection: close header and only one request per connection
is used. Note that a value of 3 or more is needed for NTLM
authentication (Integrated Windows Authentication) to function.
Returns previous setting.
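For example, a script fetching pages protected by NTLM (Integrated
Windows Authentication) might ensure at least the documented minimum
of three requests per connection; the exact value chosen is
illustrative:
  <urlcp maxkeepaliverequests 3>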
maxpgcachesz or maxpagecachesz or maxpgcachesize or
maxpagecachesize or pagecachesz
(integer)
Maximum page cache size in bytes. Only indirect pages are cached
(e.g. frames, iframes, scripts), not directly <fetch>ed
pages. Default is 5MB. Suffixes KB, MB, GB may be given,
e.g. "5MB". Returns previous setting.
Added in version 5.00.1096313795 20040927.
maxpgsize
(integer)
Sets the maximum page size that will be accepted for a
document. Documents that are longer will be truncated and
generate a "Max page size exceeded (truncated)" message.
The default is
100MB;
a setting of -1 indicates no limit.
In versions prior to 8, the default was 512KB.
In versions prior to 2.6.937800000 19990920, the default was
100KB, the size of any headers was included, and -1 was not
permitted. In version 4.01.1023500000 20020607 and later
the previous setting is returned. Aka maxpagesize,
maxpagesz, maxpgsz.
The maximum page size is checked after all content and/or
transfer encodings (if any) are decoded; i.e. it controls the size
of the final document returned by <fetch>. Since encodings
are decoded on-the-fly as the document is downloaded, reaching
maxpgsize will typically abort the network transfer as well.
See also the maxdownloadsize setting for limiting network
document size directly (though setting maxpgsize is typically
enough).
maxprotspacecachesz or maxprotspacecachesize
(integer)
Maximum protection space cache size in bytes. The protection
space cache is used to determine which URLs are protected with
what user/pass credentials, so that later fetches to the same
space do not need to negotiate credentials (and waste
transactions). Defaults to 128KB. Suffixes KB, MB,
GB may be given, e.g. "5MB". Returns previous
setting. Added in version 5.1.
maxprotspaceidletime
(integer)
Maximum idle time of a protection space in the cache, in seconds. After this amount of time in disuse, the protection space will be deleted, which means future fetches may have to re-negotiate credentials. Defaults to 3600 (i.e. 1 hour); -1 is unlimited. Returns previous setting. Added in version 7.06.1465935000 20160614.
maxprotspacelifetime
(integer)
Maximum lifetime of a protection space in the cache, in seconds. After this amount of time since its creation, the protection space will be deleted, which means future fetches may have to re-negotiate credentials. Defaults to -1 (unlimited). Returns previous setting. Added in version 5.1. Prior to version 7.06.1465935000 20160614, the default was 3600 (one hour).
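As an illustration, a long-running script that repeatedly fetches
from the same password-protected site might enlarge the protection
space cache and let entries idle longer, to avoid re-negotiating
credentials; the values below are illustrative:
  <urlcp maxprotspacecachesz "256KB">
  <urlcp maxprotspaceidletime 7200>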
maxredirs
(integer)
Sets the maximum number of redirects that will be followed per URL
fetch. Exceeding this limit generates the error "Too many
redirections (N) while fetching ..." and the fetch fails. The
limit may be 0 to disallow redirects altogether. The default
maxredirs
value is
20 (5 in Texis version 7 and earlier).
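For example, a script that should treat any redirect as a failure
could disallow redirects outright:
  <urlcp maxredirs 0>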
pacfetchretrydelay
(integer)
Sets the time in seconds before retrying a failed proxy
pacurl
fetch (here), if enabled.
Because all URLs fetched depend on the proxy auto-config script,
all <fetch>es would keep attempting to re-fetch the same
PAC URL if it fails. Thus, to reduce the load on the PAC URL
server, the PAC script will not be re-fetched after error for
pacfetchretrydelay
seconds. The default is 10; a negative
value means infinite (never retry automatically). Changing the
pacurl
will clear the last-try timestamp and allow a
re-fetch to occur. Added in version 7.05.
proxyretrydelay
(integer)
Sets the time in seconds before retrying a proxy when other
proxies are available. When proxy auto-config is enabled
(here), it is possible that the
FindProxyForURL()
PAC function may return more than one
proxy for a given URL: these proxies are normally tried in the
order returned, until one succeeds. However, a "bad"
(e.g. unresponsive) proxy is flagged in the proxy cache: every
subsequent <fetch>'s FindProxyForURL()-returned list
will be altered to have such bad proxies moved to the end of the
list. This deprecation lasts proxyretrydelay
seconds (or
until the proxy succeeds).
This allows successive <fetch>es to dynamically adapt to
unresponsive proxies, even when the FindProxyForURL()
list
may be constant or unaware of the proxy's unresponsiveness. For
example, if FindProxyForURL()
always returns
"PROXY flaky.example.com; PROXY reliable.example.com",
once the flaky.example.com
proxy detectably fails, future
fetches will try reliable.example.com
first, for up to
proxyretrydelay
seconds, instead of waiting for
flaky.example.com
to fail first.
However, if FindProxyForURL()
always returns just
"PROXY flaky.example.com
", that proxy will alway be
tried, even after failure: there is no other proxy offered to try.
The proxyretrydelay
setting was added in version 7.05, and
defaults to 300. Negative values mean infinite, i.e. never
automatically retry a bad proxy (when others are offered). The
internal cache of bad proxies may be cleared with the
clearproxycache
option (here).
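As an illustration, a script using proxy auto-config might shorten
how long a failed proxy stays demoted so it is retried sooner; the
PAC URL and delay below are purely illustrative:
  <urlcp pacurl "http://config.example.com/proxy.pac">
  <urlcp proxyretrydelay 60>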
savedownloaddoc
(boolean)
Whether to save the network-transferred download doc, if it varies
from the final (after content/transfer encodings decoded)
document. The default is off to save memory, since the decoded
document is usually more useful. Added in version 5.01.1249203000
20090802.
scriptmaxtimer
(integer)
Sets the maximum time (in seconds) to run JavaScript timers (set by
setInterval() and setTimeout()). This represents a
compromise between the dynamic JavaScript environment and the
static return value of the fetch lib. The default is 3 seconds,
which is less than the scripttimeout
and allows some timers
to run, but doesn't wait indefinitely for an infinitely-recurring
setInterval()
. A value of -1 means no limit (but
scripttimeout
still applies). Returns previous value.
Added in version 4.03.1050609000 20030417. See also
scriptrealtimers
(here).
scriptmem
(integer/size)
Controls how much memory (in bytes) to allow the JavaScript engine
to allocate when running JavaScript code. Exceeding this limit
may generate an error such as "JavaScript exceeded scriptmem
limit". This helps prevent erroneous JavaScript pages from
consuming all available memory, e.g. if there is an infinite
JavaScript loop. Standard memory size suffixes such as MB
or KB
may be appended to the integer value for clarity.
The default value is 20MB. Note that a very low limit may
cause problems even for pages with no JavaScript, as some
JavaScript library objects must be allocated for every page;
a minimum value of several MB is recommended.
Returns previous setting. Added in version 4.01.1023500000
20020607.
scriptgcthreshold
(integer/size/percentage)
Sets the threshold of scriptmem usage at which the JavaScript
engine should begin garbage collection. Can be a percentage
(e.g. the default of "75%"), or an absolute integer/size
(e.g. "15MB"). Added in version 7.06.1490209000 20170322.
scripttimeout
(integer)
Controls how much total time (in seconds) to allow JavaScript code
to execute on a page. Exceeding this limit will generate a "Timeout: JavaScript exceeded scripttimeout" message. This helps
prevent an infinite loop in JavaScript from consuming all CPU and
hanging the process. Note that this is a limit for the
total time consumed by a page's JavaScript, not per
<SCRIPT>
block. This timeout also applies after the
page has been fetched, so it need not be smaller than the page
timeout
. The default is 5 seconds. -1 indicates no limit.
Returns previous setting. Added in version 4.01.1023500000 20020607.
Do not confuse this setting with the Vortex script
timeout (here), nor the fetch timeout
(below).
timeout
(integer)
Sets the per-fetch timeout, in seconds. A document fetch that
takes longer than the timeout is aborted, the data read so far (if
any) is returned, and an error message is issued (may be captured
via putmsg
, here). The default is 30
seconds. This timeout applies to nslookup
, and to
each URL fetched by fetch
and submit
, so a framed
document or one with <SCRIPT SRC=...>
links, redirects
etc. may take longer. Do not confuse this setting with the
Vortex script timeout (here), nor the
JavaScript timeout scripttimeout
(above).
writebuffersize
(integer)
Sets the initial buffer size to use for some writes, e.g. for
writing <submit TOFILE>
documents to disk, or decoding
content/transfer encodings. The default is 32KB. Some write
buffers may increase past this limit if needed. Added in version
5.01.1249039000 20090731. Returns previous buffer size.
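For example, a script saving large <submit TOFILE> downloads might
start with a larger write buffer; the byte value shown is
illustrative:
  <urlcp writebuffersize 131072>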