Home

Unlock the Full Power of Redis 64-bit 3.0.501 for Windows – Download Today.

Download Redis 64-bit 3.0.501

Screenshots

Redis 64-bit 3.0.501

This is not the latest version of Redis 64-bit available.


All Checks are Passing

3 Passing Tests


Validation Testing Passed

Verification Testing Passed

Details

Scan Testing Successful:

No detections found in any package files

Details
  • Generic
  • Individual
  • Ansible
  • PS DSC
Add to Script Builder Learn More

Deployment Method: Individual Install, Upgrade, & Uninstall

  • Install
  • Upgrade
  • Uninstall

To install Redis 64-bit, run the following command from the command line or from PowerShell:

> choco install redis-64 --version=3.0.501

To upgrade Redis 64-bit, run the following command from the command line or from PowerShell:

> choco upgrade redis-64 --version=3.0.501

To uninstall Redis 64-bit, run the following command from the command line or from PowerShell:

> choco uninstall redis-64

Deployment Method:

1. Enter Your Internal Repository Url

(this should look similar to https://community.chocolatey.org/api/v2/)

2. Setup Your Environment

1. Ensure you are set up for organizational deployment

Please see the organizational deployment guide

  • Option 1: Cached Package (Unreliable, Requires Internet)
    • You can also just download the package and push it to a repository Download
  • Option 2: Internalized Package (Reliable, Scalable)
    • Open Source
      • Download the package:

        Download
      • Follow manual internalization instructions
    • Package Internalizer (C4B)
      • Run: (additional options) choco download redis-64 --internalize --version=3.0.501 --source=https://community.chocolatey.org/api/v2/
      • For package and dependencies run: choco push --source="'INTERNAL REPO URL'"
      • Automate package internalization

    3. Copy Your Script

    choco upgrade redis-64 -y --source="'INTERNAL REPO URL'" --version="'3.0.501'" [other options]

    See options you can pass to upgrade.

    See best practices for scripting.

    If you do use a PowerShell script, use the following to ensure bad exit codes are shown as failures:

    choco upgrade redis-64 -y --source="'INTERNAL REPO URL'" --version="'3.0.501'"
    $exitCode = $LASTEXITCODE

    Write-Verbose "Exit code was $exitCode"
    $validExitCodes = @(0, 1605, 1614, 1641, 3010)
    if ($validExitCodes -contains $exitCode) {
      Exit 0
    }
    Exit $exitCode

    - name: Install redis-64
      win_chocolatey:
        name: redis-64
        version: '3.0.501'
        source: INTERNAL REPO URL
        state: present

    See docs at https://docs.ansible.com/ansible/latest/modules/win_chocolatey_module.html.

    chocolatey_package 'redis-64' do
      action :install
      source 'INTERNAL REPO URL'
      version '3.0.501'
    end

    See docs at https://docs.chef.io/resource_chocolatey_package.html.

    cChocoPackageInstaller redis-64
    {
        Name    = "redis-64"
        Version = "3.0.501"
        Source  = "INTERNAL REPO URL"
    }

    Requires cChoco DSC Resource. See docs at https://github.com/chocolatey/cChoco.
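    The exit-code handling in the PowerShell upgrade script above can be sketched in Python. This is a minimal illustration, not part of any Chocolatey tooling; `is_successful_exit` is a hypothetical helper name, and the code-meaning comments reflect the standard Windows Installer exit codes these values correspond to.

    ```python
    # Exit codes treated as success, mirroring the PowerShell snippet above:
    # 1605 = product not installed, 1614 = product uninstalled,
    # 1641 = success (reboot initiated), 3010 = success (reboot required).
    VALID_EXIT_CODES = {0, 1605, 1614, 1641, 3010}

    def is_successful_exit(exit_code: int) -> bool:
        """Return True when a choco exit code should be reported as success."""
        return exit_code in VALID_EXIT_CODES

    print(is_successful_exit(3010))  # reboot required still counts as success
    print(is_successful_exit(1603))  # a real installer failure
    ```

    The point of the whitelist is that some non-zero codes (pending reboot, already uninstalled) should not fail a deployment pipeline.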

    Package Approved

    This package was approved as a trusted package on 31 May 2016.

    Description

    A port of Redis to Windows 64-bit.

    Redis is a very popular open-source, networked, in-memory, key-value data store known for high performance, flexibility, a rich set of data structures, and a simple straightforward API.
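    That "simple straightforward API" is visible in Redis's wire protocol (RESP), where every command is sent as an array of bulk strings. The sketch below shows how a client encodes a command; `encode_command` is a hypothetical helper written for illustration, not taken from any Redis client library, and no running server is assumed.

    ```python
    def encode_command(*parts: str) -> bytes:
        """Encode a Redis command as a RESP array of bulk strings.

        RESP format: '*<count>\r\n' followed by, for each argument,
        '$<byte-length>\r\n<bytes>\r\n'.
        """
        out = [f"*{len(parts)}\r\n".encode()]
        for part in parts:
            data = part.encode()
            out.append(b"$%d\r\n%s\r\n" % (len(data), data))
        return b"".join(out)

    # SET greeting hello -> *3, then bulk strings SET / greeting / hello
    print(encode_command("SET", "greeting", "hello"))
    ```

    Sending these bytes over a TCP connection to port 6379 (the default port configured below) is all a minimal client needs to do.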

    To report an issue please follow the instructions on the project's wiki.

Files

Redis on Windows Release Notes.docx
Redis on Windows.docx
redis-benchmark.exe
md5: BE0AB34F6F3D77D30E9B94A28D586CD2 | sha1: 57ACF41ECDE1DD98BD941E3B57F38C62B2E90F07 | sha256: 854E4F1A55A83BC85AAD2774D881F083CC91143B62F60D73DA8106C905683FAA | sha512: 3A089C3716B1C5B6563BDBBE66959668EA1ADC203BA459F0E19C2F2E5C364B6A4E51692C779DBB06A9BD47ACA2FB92740C5FC10AE39F98C97D6A711F3C58E975
redis-benchmark.pdb
redis-check-aof.exe
md5: 8ABD792E064EB57435B0F7A512CC32FB | sha1: CDE954F74B6AF34F2A4AD0078D88A9F741A4804E | sha256: 7461AA5769D82A9770F6C0A08CB103964E1E47E7D84F7F67D60FEFC41D827AD5 | sha512: 2E8A42151A54FA1F5F9B463A556976AD9B483CAFED01D51D49A263F0C8DDDF1BBA0E82493F08127688C3F4F4E29CF7A9593BB37B062CD3947FB2AA8C85124296
redis-check-aof.pdb
redis-check-dump.exe
md5: 98C463194C999B79BB3DE2F0A95C19AF | sha1: 3B0A23115086D34E9D7A3A8019851BF5B04804DA | sha256: 11E25E747667BAF789D353A36A06B60388514E084F29018E49333692A150552D | sha512: 8E6E1123F3A86EA2C2383B9B010556B0D184583C2CDF134C8C4AA390BC5BBE884AF26C9756937BDFE39C415379304E4996DC52AFAF2E86ED18AF39A6C5EBE1D7
redis-check-dump.pdb
redis-cli.exe
md5: FB9B9FA37AABC3795994D0FCEF3F95F8 | sha1: 94CB1262C5011BA5E88DC103ABD3EC40C7E2BA80 | sha256: BB4AD4E6138B1E8F052504EB90741220083ECB5F72AFD29F906581992A2427B8 | sha512: CF0B71C976D7DDDA3DDE4DA886041BA85593A443D0F65C6ACF16A9C88C0F348C0832251D13B602EC4752B176C6621397BE1DE76E48F2FC46944955A8E822C1DD
redis-cli.pdb
redis-server.exe
md5: 566F7C50971ACEF6F117E0F15B8E90A1 | sha1: 1739D8C577FC164356B414C6943D30863D56D790 | sha256: FD420A5F740EA272F76423F2B519FC2F668BA988B37E0ECD1A350400EA1B9079 | sha512: A064F91F055ADC9CABFCE072FB0A3751025F704BC7CF72833E5B657EB6B2D60629F4892F9D8FCBC1DEA53B2F43685CD167E99F3C48BA5BCD3DC813B8F98D6D59
redis-server.pdb
redis.windows-service.conf

# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include .\path\to\local.conf
# include c:\path\to\other.conf

################################ GENERAL #####################################

# On Windows, daemonize and pidfile are not supported.
# However, you can run redis as a Windows service, and specify a logfile.
# The logfile will contain the pid.

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile "Logs/redis_log.txt"

# To enable logging to the Windows EventLog, just set 'syslog-enabled' to
# yes, and optionally update the other syslog parameters to suit your needs.
# If Redis is installed and launched as a Windows Service, this will
# automatically be enabled.
syslog-enabled yes

# Specify the source name of the events in the Windows Application log.
syslog-ident redis

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only slave exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only slaves using 'rename-command' to shadow all the
# administrative / dangerous commands.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the slaves.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#    file on disk. Later the file is transferred by the parent
#    process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#    RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new slaves arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple slaves
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the slaves.
#
# This is important since once the transfer starts, it is not possible to serve
# new slaves arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_slave_period option. The default value is 10
# seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves.
But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the latency for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and slaves are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The biggest the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

# If Redis is to be used as an in-memory-only cache without any kind of
# persistence, then the fork() mechanism used by the background AOF/RDB
# persistence is unnecessary. As an optimization, all persistence can be
# turned off in the Windows version of Redis. This will redirect heap
# allocations to the system heap allocator, and disable commands that would
# otherwise cause fork() operations: BGSAVE and BGREWRITEAOF.
# This flag may not be combined with any of the other flags that configure
# AOF and RDB operations.
# persistence-available [(yes)|no]

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# WARNING: not setting maxmemory will cause Redis to terminate with an
# out-of-memory exception if the heap limit is reached.
#
# NOTE: since Redis uses the system paging file to allocate the heap memory,
# the Working Set memory usage showed by the Windows Task Manager or by other
# tools such as ProcessExplorer will not always be accurate. For example, right
# after a background save of the RDB or the AOF files, the working set value
# may drop significantly. In order to check the correct amount of memory used
# by the redis-server to store the data, use the INFO client command.
The INFO
# command shows only the memory used to store the redis data, not the extra
# memory used by the Windows process for its own requirements. The extra amount
# of memory not reported by the INFO command can be calculated subtracting the
# Peak Working Set reported by the Windows Task Manager and the used_memory_peak
# reported by the INFO command.
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx getset sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance for default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
#
# maxmemory-samples 3

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety.
It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, which
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but the file is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user is required
# to fix the AOF file using the "redis-check-aof" utility before restarting
# the server.
#
# Note that if the AOF file is found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet call write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the
# natural termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support by uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file.
This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be
# unreachable for it to be considered in failure state.
# Most other internal time limits are a multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid starting a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently
#    down). If the last interaction is too old, the slave will not try to
#    failover at all.
#
# The point "2" can be tuned by the user.
Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to
# failover a master, while a too small value may prevent the cluster from
# being able to elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# left without working slaves. This improves the cluster's ability
# to resist failures, as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its
# master, and so forth.
It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least one hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash
# slots is no longer covered) all the cluster becomes, eventually,
# unavailable. It automatically returns available as soon as all the slots
# are covered again.
#
# However sometimes you want the subset of the cluster which is working
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to set up your cluster make sure to read the documentation
# available at the http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
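The Slow Log behaviour can be modeled with a short Python sketch (hypothetical, not the Redis implementation): commands over the threshold go into a fixed-length queue whose oldest entry is evicted, mirroring the `slowlog-log-slower-than` and `slowlog-max-len` directives below.

```python
from collections import deque

class SlowLog:
    """Toy model of the slowlog: times are in microseconds, as in the config."""

    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)  # oldest entry evicted automatically

    def record(self, command, exec_time_us):
        # A negative threshold disables logging; zero logs every command.
        if self.slower_than_us < 0:
            return
        if exec_time_us >= self.slower_than_us:
            self.entries.append((command, exec_time_us))

log = SlowLog(slower_than_us=10000, max_len=2)
log.record("GET k", 50)          # fast, not logged
log.record("KEYS *", 150000)     # slow, logged
log.record("SORT big", 20000)    # slow, logged
log.record("SMEMBERS s", 30000)  # slow, oldest entry falls off
print(list(log.entries))  # [('SORT big', 20000), ('SMEMBERS s', 30000)]
```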
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume
# memory. You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user, who can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal to or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
latency-monitor-threshold 0

############################# Event notification ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
#  The "notify-keyspace-events" directive takes as argument a string that is
#  composed of zero or multiple characters. The empty string means that
#  notifications are disabled.
#
#  Example: to enable list and generic events, from the point of view of the
#  event name, use:
#
#  notify-keyspace-events Elg
#
#  Example 2: to get the stream of the expired keys subscribing to channel
#  name __keyevent@0__:expired use:
#
#  notify-keyspace-events Ex
#
#  By default all notifications are disabled because most users don't need
#  this feature and the feature has some overhead. Note that if you don't
#  specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following
# directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length
# and elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When a HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding.
The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with a 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of
# clients that are not reading data from the server fast enough for some
# reason (a common reason is that a Pub/Sub client can't consume messages as
# fast as the publisher can produce them).
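The disconnect policy for slow clients can be sketched in Python (a hypothetical helper, not Redis code): a client is dropped the moment the hard limit is hit, or once its buffer has stayed over the soft limit for the configured number of continuous seconds — the exact semantics the directives below spell out.

```python
def should_disconnect(buf_bytes, over_soft_since, now,
                      hard=32 * 1024 ** 2, soft=16 * 1024 ** 2,
                      soft_seconds=10):
    """buf_bytes: current output buffer size in bytes.
    over_soft_since: timestamp when the buffer first exceeded the soft limit
    without dropping back under it, or None. A limit of zero disables that
    check, as in the config."""
    if hard and buf_bytes >= hard:
        return True
    if soft and over_soft_since is not None and now - over_soft_since >= soft_seconds:
        return True
    return False

# 32 MB hard limit reached: disconnect immediately
print(should_disconnect(32 * 1024 ** 2, over_soft_since=None, now=100))  # True
# 20 MB buffer, over the 16 MB soft limit for 12 continuous seconds
print(should_disconnect(20 * 1024 ** 2, over_soft_since=90, now=102))    # True
# 20 MB buffer but only 3 seconds over the soft limit: still allowed
print(should_disconnect(20 * 1024 ** 2, over_soft_since=99, now=102))    # False
```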
#
# The limit can be set differently for the three different classes of
# clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reaches 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can be read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard and the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
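As a back-of-the-envelope illustration (plain arithmetic, not Redis internals), the "hz" value translates directly into the period of the background task handler:

```python
def cron_period_ms(hz):
    """Interval between background-task runs for a given 'hz' value.
    The valid range, per the documentation below, is 1 to 500."""
    if not 1 <= hz <= 500:
        raise ValueError("hz must be between 1 and 500")
    return 1000 / hz

print(cron_period_ms(10))   # 100.0 -> default: background work every 100 ms
print(cron_period_ms(100))  # 10.0  -> low-latency setups: every 10 ms
```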
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf

############################ redis.windows.conf ###############################

# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.
Include files can include
# other files, so use this wisely.
#
# Notice that option "include" won't be rewritten by command "CONFIG REWRITE"
# from an admin or Redis Sentinel. Since Redis always uses the last processed
# line as the value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config changes at
# runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include .\path\to\local.conf
# include c:\path\to\other.conf

################################ GENERAL #####################################

# On Windows, daemonize and pidfile are not supported.
# However, you can run redis as a Windows service, and specify a logfile.
# The logfile will contain the pid.

# Accept connections on the specified port, default is 6379.
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need a high backlog in order
# to avoid slow clients connection issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# By default Redis listens for connections from all the network interfaces
# available on the server. It is possible to listen to just one or multiple
# interfaces using the "bind" configuration directive, followed by one or
# more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1

# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Keep the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 60 seconds.
tcp-keepalive 0

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile ""

# To enable logging to the Windows EventLog, just set 'syslog-enabled' to
# yes, and optionally update the other syslog parameters to suit your needs.
# If Redis is installed and launched as a Windows Service, this will
# automatically be enabled.
# syslog-enabled no

# Specify the source name of the events in the Windows Application log.
# syslog-ident redis

# Set the number of databases.
The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16

################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save"
#   lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""

save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process starts working again Redis will
# automatically allow writes again.
#
# However if you have set up your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dumping .rdb databases?
# By default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no', but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a
# performance hit to pay (around 10%) when saving and loading RDB files,
# so you can disable it for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes

# The filename where to dump the DB
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./

################################# REPLICATION #################################

# Master-Slave replication. Use slaveof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis
# replication.
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of slaves.
# 2) Redis slaves are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the
#    next sections of this file) with a sensible value depending on your
#    needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition slaves automatically try to reconnect to masters
#    and resynchronize with them.
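Point 2 above (partial resynchronization) can be sketched as follows — a hypothetical model, not the real PSYNC handshake: a reconnecting slave can resume from its replication offset only while the bytes it missed still fit in the replication backlog (the repl-backlog-size buffer described later in this file):

```python
def resync_kind(slave_offset, master_offset, backlog_size):
    """Return 'partial' when the data the slave missed is still held in the
    backlog buffer, 'full' otherwise (hypothetical helper)."""
    missed = master_offset - slave_offset
    if 0 <= missed <= backlog_size:
        return "partial"
    return "full"

# Slave missed 512 KB while disconnected; the 1 MB backlog still covers it.
print(resync_kind(slave_offset=1_000_000, master_offset=1_524_288,
                  backlog_size=1024 * 1024))  # partial
# Slave missed 8 MB; the backlog has wrapped, so a full SYNC is required.
print(resync_kind(slave_offset=1_000_000, master_offset=9_388_608,
                  backlog_size=1024 * 1024))  # full
```

This is why a larger backlog tolerates longer disconnections, at the cost of memory.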
#
# slaveof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the slave to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the slave request.
#
# masterauth <master-password>

# When a slave loses its connection with the master, or when the replication
# is still in progress, the slave can act in two different ways:
#
# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if slave-serve-stale-data is set to 'no' the slave will reply with
#    an error "SYNC with master in progress" to all kinds of commands
#    but to INFO and SLAVEOF.
#
slave-serve-stale-data yes

# You can configure a slave instance to accept writes or not. Writing against
# a slave instance may be useful to store some ephemeral data (because data
# written on a slave will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default slaves are read-only.
#
# Note: read only slaves are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the
# instance. Still a read only slave exports by default all the administrative
# commands such as CONFIG, DEBUG, and so forth. To a limited extent you can
# improve security of read only slaves using 'rename-command' to shadow all
# the administrative / dangerous commands.
slave-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New slaves and reconnecting slaves that are not able to continue the
# replication process just receiving differences, need to do what is called
# a "full synchronization". An RDB file is transmitted from the master to
# the slaves. The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#    file on disk. Later the file is transferred by the parent
#    process to the slaves incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes
#    the RDB file to slave sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more slaves
# can be queued and served with the RDB file as soon as the current child
# producing the RDB file finishes its work. With diskless replication
# instead once the transfer starts, new slaves arriving will be queued and a
# new transfer will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount
# of time (in seconds) before starting the transfer in the hope that
# multiple slaves will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the
# delay the server waits in order to spawn the child that transfers the RDB
# via socket to the slaves.
#
# This is important since once the transfer starts, it is not possible to
# serve new slaves arriving, which will be queued for the next RDB transfer,
# so the server waits a delay in order to let more slaves arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Slaves send PINGs to the server in a predefined interval. It's possible to
# change this interval with the repl_ping_slave_period option. The default
# value is 10 seconds.
#
# repl-ping-slave-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of slave.
# 2) Master timeout from the point of view of slaves (data, pings).
# 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-slave-period otherwise a timeout will be detected
# every time there is low traffic between the master and the slave.
#
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to slaves. But this can add a delay for
# the data to appear on the slave side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the slave side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic
# conditions or when the master and slaves are many hops away, turning this
# to "yes" may be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# slave data when slaves are disconnected for some time, so that when a slave
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the slave missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the slave can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a slave connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected slaves for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last slave disconnected, for
# the backlog buffer to be freed.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The slave priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a slave to promote into a
# master if the master is no longer working correctly.
#
# A slave with a low priority number is considered better for promotion, so
# for instance if there are three slaves with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the slave as not able to perform the
# role of master, so a slave with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
slave-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N slaves connected, having a lag less or equal than M seconds.
#
# The N slaves need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the slave, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough slaves
# are available, to the specified number of seconds.
#
# For example to require at least 3 slaves with a lag <= 10 seconds use:
#
# min-slaves-to-write 3
# min-slaves-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-slaves-to-write is set to 0 (feature disabled) and
# min-slaves-max-lag is set to 10.

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands. This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
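For instance, a shared host might combine both protections. The password and renamed command below are placeholders, not suggested values:

```conf
# Clients must AUTH before any other command
requirepass replace-with-a-long-random-password
# Hide CONFIG behind an unguessable name, keeping it usable by internal tools
rename-command CONFIG cfg-e61cd3a4bc6f9e2b
```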
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to slaves may cause problems.

################################### LIMITS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# maxclients 10000

# If Redis is to be used as an in-memory-only cache without any kind of
# persistence, then the fork() mechanism used by the background AOF/RDB
# persistence is unnecessary. As an optimization, all persistence can be
# turned off in the Windows version of Redis. This will redirect heap
# allocations to the system heap allocator, and disable commands that would
# otherwise cause fork() operations: BGSAVE and BGREWRITEAOF.
# This flag may not be combined with any of the other flags that configure
# AOF and RDB operations.
# persistence-available [(yes)|no]

# Don't use more memory than the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, such as SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU cache, or to set
# a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have slaves attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the slaves are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of slaves is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have slaves attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for slave
# output buffers (but this is not needed if the policy is 'noeviction').
#
# WARNING: not setting maxmemory will cause Redis to terminate with an
# out-of-memory exception if the heap limit is reached.
#
# NOTE: since Redis uses the system paging file to allocate the heap memory,
# the Working Set memory usage shown by the Windows Task Manager or by other
# tools such as ProcessExplorer will not always be accurate. For example, right
# after a background save of the RDB or the AOF files, the working set value
# may drop significantly. In order to check the correct amount of memory used
# by the redis-server to store the data, use the INFO client command.
# The INFO
# command shows only the memory used to store the redis data, not the extra
# memory used by the Windows process for its own requirements. The extra amount
# of memory not reported by the INFO command can be calculated subtracting the
# Peak Working Set reported by the Windows Task Manager and the used_memory_peak
# reported by the INFO command.
#
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy volatile-lru

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can select as well the sample
# size to check. For instance by default Redis will check three keys and
# pick the one that was used less recently, you can change the sample size
# using the following configuration directive.
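Putting these directives together, a pure-cache instance might look like the following. The size and sample count are illustrative:

```conf
# Cap the dataset at 2 gigabytes and evict any key by approximated LRU
maxmemory 2gb
maxmemory-policy allkeys-lru
# Check 5 candidate keys per eviction for a closer LRU approximation
maxmemory-samples 5
```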
#
# maxmemory-samples 3

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

appendonly no

# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety.
# It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
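A durability-leaning setup, for example, would enable the AOF with the default fsync policy (values illustrative):

```conf
appendonly yes
# One fsync per second: at most one second of writes lost on power failure
appendfsync everysec
# Keep fsync-ing even while a background save or rewrite is running
no-appendfsync-on-rewrite no
```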
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error.
# This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

################################ LUA SCRIPTING ###############################

# Max execution time of a Lua script in milliseconds.
#
# If the maximum execution time is reached Redis will log that a script is
# still in execution after the maximum allowed time and will start to
# reply to queries with an error.
#
# When a long running script exceeds the maximum execution time only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that did not yet call write commands. The second
# is the only way to shut down the server in the case a write command was
# already issued by the script but the user doesn't want to wait for the natural
# termination of the script.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
lua-time-limit 5000

################################ REDIS CLUSTER ###############################
#
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# WARNING EXPERIMENTAL: Redis Cluster is considered to be stable code, however
# in order to mark it as "mature" we need to wait for a non trivial percentage
# of users to deploy it in production.
# ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#
# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes

# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
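A minimal cluster-node configuration might look like this; the file name and timeout are illustrative:

```conf
# Start this instance as a cluster node
cluster-enabled yes
# Auto-managed state file; must be unique per node on the same host
cluster-config-file nodes-6379.conf
# Milliseconds of unreachability before a node is considered failing
cluster-node-timeout 15000
```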
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf

# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000

# A slave of a failing master will avoid to start a failover if its data
# looks too old.
#
# There is no simple way for a slave to actually have an exact measure of
# its "data age", so the following two checks are performed:
#
# 1) If there are multiple slaves able to failover, they exchange messages
#    in order to try to give an advantage to the slave with the best
#    replication offset (more data from the master processed).
#    Slaves will try to get their rank by offset, and apply to the start
#    of the failover a delay proportional to their rank.
#
# 2) Every single slave computes the time of the last interaction with
#    its master. This can be the last ping or command received (if the master
#    is still in the "connected" state), or the time that elapsed since the
#    disconnection with the master (if the replication link is currently down).
#    If the last interaction is too old, the slave will not try to failover
#    at all.
#
# The point "2" can be tuned by user.
# Specifically a slave will not perform
# the failover if, since the last interaction with the master, the time
# elapsed is greater than:
#
#   (node-timeout * slave-validity-factor) + repl-ping-slave-period
#
# So for example if node-timeout is 30 seconds, and the slave-validity-factor
# is 10, and assuming a default repl-ping-slave-period of 10 seconds, the
# slave will not try to failover if it was not able to talk with the master
# for longer than 310 seconds.
#
# A large slave-validity-factor may allow slaves with too old data to failover
# a master, while a too small value may prevent the cluster from being able to
# elect a slave at all.
#
# For maximum availability, it is possible to set the slave-validity-factor
# to a value of 0, which means that slaves will always try to failover the
# master regardless of the last time they interacted with the master.
# (However they'll always try to apply a delay proportional to their
# offset rank).
#
# Zero is the only value able to guarantee that when all the partitions heal
# the cluster will always be able to continue.
#
# cluster-slave-validity-factor 10

# Cluster slaves are able to migrate to orphaned masters, that are masters
# that are left without working slaves. This improves the cluster ability
# to resist to failures as otherwise an orphaned master can't be failed over
# in case of failure if it has no working slaves.
#
# Slaves migrate to orphaned masters only if there are still at least a
# given number of other working slaves for their old master. This number
# is the "migration barrier". A migration barrier of 1 means that a slave
# will migrate only if there is at least 1 other working slave for its master
# and so forth.
# It usually reflects the number of slaves you want for every
# master in your cluster.
#
# Default is 1 (slaves migrate only if their masters remain with at least
# one slave). To disable migration just set it to a very large value.
# A value of 0 can be set but is useful only for debugging and dangerous
# in production.
#
# cluster-migration-barrier 1

# By default Redis Cluster nodes stop accepting queries if they detect there
# is at least an hash slot uncovered (no available node is serving it).
# This way if the cluster is partially down (for example a range of hash slots
# are no longer covered) all the cluster becomes, eventually, unavailable.
# It automatically returns available as soon as all the slots are covered again.
#
# However sometimes you want the subset of the cluster which is working,
# to continue to accept queries for the part of the key space that is still
# covered. In order to do so, just set the cluster-require-full-coverage
# option to no.
#
# cluster-require-full-coverage yes

# In order to setup your cluster make sure to read the documentation
# available at http://redis.io web site.

################################## SLOW LOG ###################################

# The Redis Slow Log is a system to log queries that exceeded a specified
# execution time. The execution time does not include the I/O operations
# like talking with the client, sending the reply and so forth,
# but just the time needed to actually execute the command (this is the only
# stage of command execution where the thread is blocked and can not serve
# other requests in the meantime).
#
# You can configure the slow log with two parameters: one tells Redis
# what is the execution time, in microseconds, to exceed in order for the
# command to get logged, and the other parameter is the length of the
# slow log. When a new command is logged the oldest one is removed from the
# queue of logged commands.

# The following time is expressed in microseconds, so 1000000 is equivalent
# to one second. Note that a negative number disables the slow log, while
# a value of zero forces the logging of every command.
slowlog-log-slower-than 10000

# There is no limit to this length. Just be aware that it will consume memory.
# You can reclaim memory used by the slow log with SLOWLOG RESET.
slowlog-max-len 128

################################ LATENCY MONITOR ##############################

# The Redis latency monitoring subsystem samples different operations
# at runtime in order to collect data related to possible sources of
# latency of a Redis instance.
#
# Via the LATENCY command this information is available to the user that can
# print graphs and obtain reports.
#
# The system only logs operations that were performed in a time equal or
# greater than the amount of milliseconds specified via the
# latency-monitor-threshold configuration directive. When its value is set
# to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled since it is mostly not needed
# if you don't have latency issues, and collecting data has a performance
# impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
# "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
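As an alternative to the defaults below, an instance being debugged for latency might record events slower than 100 milliseconds while keeping a longer slow log history (thresholds illustrative):

```conf
# Record latency events taking 100 ms or more (0 disables the monitor)
latency-monitor-threshold 100
# Log commands slower than 10 ms and keep the last 1024 of them
slowlog-log-slower-than 10000
slowlog-max-len 1024
```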
latency-monitor-threshold 0

############################# Event notification ##############################

# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/notifications
#
# For instance if keyspace events notification is enabled, and a client
# performs a DEL operation on key "foo" stored in the Database 0, two
# messages will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# It is possible to select the events that Redis will notify among a set
# of classes. Every class is identified by a single character:
#
#  K     Keyspace events, published with __keyspace@<db>__ prefix.
#  E     Keyevent events, published with __keyevent@<db>__ prefix.
#  g     Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
#  $     String commands
#  l     List commands
#  s     Set commands
#  h     Hash commands
#  z     Sorted set commands
#  x     Expired events (events generated every time a key expires)
#  e     Evicted events (events generated when a key is evicted for maxmemory)
#  A     Alias for g$lshzxe, so that the "AKE" string means all the events.
#
# The "notify-keyspace-events" takes as argument a string that is composed
# of zero or multiple characters. The empty string means that notifications
# are disabled.
#
# Example: to enable list and generic events, from the point of view of the
# event name, use:
#
# notify-keyspace-events Elg
#
# Example 2: to get the stream of the expired keys subscribing to channel
# name __keyevent@0__:expired use:
#
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and the feature has some overhead.
# Note that if you don't
# specify at least one of K or E, no events will be delivered.
notify-keyspace-events ""

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Similarly to hashes, small lists are also encoded in a special way in order
# to save a lot of space. The special representation is only used when
# you are under the following limits:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# slave  -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and slave clients, since
# subscribers and slaves receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10.
# Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts will be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# include /path/to/local.conf
# include /path/to/other.conf

Windows Service Documentation.docx

Virus Scan Results

    Log in or click this link to see the number of positives.

    • redis-64.3.0.501.nupkg (7579e60fe880) - ## / 57
    • redis-benchmark.exe (854e4f1a55a8) - ## / 56
    • redis-check-aof.exe (7461aa5769d8) - ## / 56
    • redis-check-dump.exe (11e25e747667) - ## / 56
    • redis-cli.exe (bb4ad4e6138b) - ## / 54
    • redis-server.exe (fd420a5f740e) - ## / 56

    In cases where actual malware is found, the packages are subject to removal. Software sometimes has false positives. Moderators do not necessarily validate the safety of the underlying software, only that a package retrieves software from the official distribution point and/or validates embedded software against the official distribution point (where distribution rights allow redistribution).

    Version History

    Version | Downloads | Last Updated | Status
    [Deprecated] Redis 64-bit 3.1.0 | 182920 | Monday, July 6, 2020 | Approved
    Redis 64-bit 3.0.503 | 202889 | Tuesday, June 28, 2016 | Approved
    Redis 64-bit 3.0.501 | 14333 | Monday, January 25, 2016 | Approved
    Redis 64-bit 3.0.500 | 2104 | Tuesday, December 8, 2015 | Approved
    Redis 64-bit 2.8.2402 | 1067 | Tuesday, June 28, 2016 | Approved
    Redis 64-bit 2.8.2400 | 795 | Monday, January 25, 2016 | Approved
    Redis 64-bit 2.8.2104 | 1138 | Monday, November 23, 2015 | Approved
    Redis 64-bit 2.8.2101 | 3364 | Wednesday, July 29, 2015 | Approved
    redis-64 2.8.21 | 1439 | Wednesday, July 1, 2015 | Approved
    redis-64 2.8.19 | 2924 | Tuesday, March 3, 2015 | Approved
    redis-64 2.8.17 | 3777 | Tuesday, October 14, 2014 | Approved
    redis-64 2.8.12 | 1307 | Friday, August 29, 2014 | Approved
    redis-64 2.8.9 | 1393 | Wednesday, June 25, 2014 | Approved
    redis-64 2.8.4 | 1484 | Monday, March 24, 2014 | Approved
    redis-64 2.6.14 | 755 | Monday, March 24, 2014 | Approved

    Copyright

    Copyright Microsoft Open Technologies, Inc.

    Release Notes

    The release notes are available here

    Dependencies

    This package has no dependencies.

    Discussion for the Redis 64-bit Package

    Ground Rules:

  • This discussion will carry over multiple versions. If you have a comment about a particular version, please note that in your comments.
  • Tell us what you love about the package or Redis 64-bit, or tell us what needs improvement.
  • Share your experiences with the package, or extra configuration or gotchas that you've found.
    Comments

    0 responses to “Unlock the Full Power of Redis 64-bit 3.0.501 for Windows – Download Today.”
