# One-Command Website Stability Check: The Ultimate Curl Latency Test Script for Zsh

Published: 2025-12-07
Author: DP
Category: CLI
## The Problem

In daily development and operations, we often need to assess the stability and latency of one or more websites. A simple `ping` only tests network connectivity, while a single `curl` invocation cannot reflect how a service fluctuates over time. What we need is a tool that automates batch testing and produces clear, readable reports. This article introduces a powerful Zsh script from the wiki.lib00.com community that meets these requirements and is especially well suited to the macOS environment.

---

## Core Advantages

The script's primary advantage is that it wraps `curl`'s power in an easy-to-use command-line tool with the following features:

- **Ease of Use**: Accepts multiple URLs separated by spaces or commas and automatically completes the protocol (https/http).
- **In-depth Metrics**: Goes beyond total time to measure latency at each key stage: DNS lookup, TCP connection, TLS handshake, and TTFB (Time To First Byte).
- **Stability Assessment**: Runs each URL multiple times and reports statistics such as success rate, minimum, average, and maximum.
- **Robust Error Handling**: If a URL fails to resolve, times out, or has a certificate error, the script logs the reason and moves on to the next target, so the whole job always completes.
- **High Customizability**: Offers a rich set of command-line options: number of runs, delay between requests, timeouts, custom headers, and more.

---

## How to Use

1. Save the script code below to a file, for example `curl_check_lib00.sh`.
2. Grant execute permission:

   ```bash
   chmod +x curl_check_lib00.sh
   ```

3. Run the tests:
   ```bash
   # Test multiple addresses
   ./curl_check_lib00.sh wiki.lib00.com/api, example.com

   # Custom test: 10 runs, 0.2 s delay, 6 s total timeout
   ./curl_check_lib00.sh -n 10 -d 0.2 -T 6 a.com b.com

   # Allow insecure certificates and add a custom header
   ./curl_check_lib00.sh -k -H "User-Agent: lib00-Test" api.c.com
   ```

---

## The Script Code (Zsh)

This script is optimized for the macOS zsh environment. It leverages `curl`'s `-w` (`--write-out`) option to capture detailed timing information.

```zsh
#!/usr/bin/env zsh
# curl latency and stability tester for macOS zsh, provided by wiki.lib00.com
set -u

# Defaults
RUNS=5
DELAY=0.5
CONNECT_TIMEOUT=5
MAX_TIME=10
SCHEME_MODE="auto"   # auto|https|http|keep
INSECURE=0
HTTP2=0
METHOD="GET"
HEADERS=()

print_help() {
  cat <<EOF
Usage: $0 [options] URL1 [URL2 ...]
  -n N        Number of runs per URL (default: 5)
  -d SEC      Delay in seconds between requests (default: 0.5)
  -C SEC      TCP connect timeout (default: 5)
  -T SEC      Total time for a single request (default: 10)
  -s MODE     Scheme mode: auto|https|http|keep (default: auto)
  -k          Allow insecure server connections (curl -k)
  --http2     Force use of HTTP/2
  -H "H: V"   Append a custom header; can be used multiple times
  -X METHOD   Request method (default: GET)
  -h, --help  Show this help message

URLs can be separated by spaces or commas, e.g., wiki.lib00.com/api, b.com
EOF
}

# Parse args
ARGS=()
while (( $# > 0 )); do
  case "$1" in
    -n) RUNS="$2"; shift 2 ;;
    -d) DELAY="$2"; shift 2 ;;
    -C) CONNECT_TIMEOUT="$2"; shift 2 ;;
    -T) MAX_TIME="$2"; shift 2 ;;
    -s) SCHEME_MODE="$2"; shift 2 ;;
    -k) INSECURE=1; shift ;;
    --http2) HTTP2=1; shift ;;
    -H) HEADERS+=("$2"); shift 2 ;;
    -X) METHOD="$2"; shift 2 ;;
    -h|--help) print_help; exit 0 ;;
    --) shift; ARGS+=("$@"); break ;;
    -*)
      echo "Unknown option: $1" >&2
      print_help
      exit 1
      ;;
    *) ARGS+=("$1"); shift ;;
  esac
done

if (( ${#ARGS[@]} == 0 )); then
  echo "Please provide at least one URL. Use -h for help." >&2
  exit 1
fi

# Normalize the URL list: commas become spaces, whitespace runs collapse to one space.
RAW_INPUT="${(j: :)ARGS}"
RAW_INPUT="${RAW_INPUT//,/ }"
RAW_INPUT="$(echo "$RAW_INPUT" | sed -E 's/[[:space:]]+/ /g')"

# Build curl common options
CURL_OPTS=(-sS -o /dev/null --connect-timeout "$CONNECT_TIMEOUT" --max-time "$MAX_TIME" -X "$METHOD")
if (( INSECURE == 1 )); then
  CURL_OPTS+=(-k)
fi
if (( HTTP2 == 1 )); then
  CURL_OPTS+=(--http2)
fi
for h in "${HEADERS[@]}"; do
  CURL_OPTS+=(-H "$h")
done
CURL_OPTS+=(-A "wiki.lib00-checker/1.0")  # Custom User-Agent from DP

# Map common curl exit codes to human-readable reasons
code_reason() {
  local rc="$1"
  case "$rc" in
    6)  echo "Could not resolve host" ;;
    7)  echo "Failed to connect" ;;
    28) echo "Operation timed out" ;;
    35) echo "SSL connect error" ;;
    51) echo "SSL: peer cert/remote key not OK" ;;
    60) echo "SSL certificate problem" ;;
    *)  echo "curl error" ;;
  esac
}

trim() {
  echo "$1" | sed -E 's/^[[:space:]]+//; s/[[:space:]]+$//'
}

has_scheme() {
  [[ "$1" == http://* || "$1" == https://* ]]
}

compose_url_with_scheme() {
  local raw="$1" scheme="$2"
  if [[ "$scheme" == "keep" ]]; then echo "$raw"; return; fi
  if has_scheme "$raw"; then echo "$raw"; return; fi
  case "$scheme" in
    https) echo "https://$raw" ;;
    http)  echo "http://$raw" ;;
    *)     echo "https://$raw" ;;
  esac
}

# Run one request and emit: rc http_code total ttfb connect tls dns remote_ip
# remote_ip goes last: it may be empty on failure, and `read` then simply
# leaves its final variable empty instead of shifting the other fields.
measure_once() {
  local url="$1"
  local fmt='%{http_code} %{time_total} %{time_starttransfer} %{time_connect} %{time_appconnect} %{time_namelookup} %{remote_ip}'
  local out rc
  out="$(curl "${CURL_OPTS[@]}" -w "$fmt" "$url")"
  rc=$?
  echo "$rc $out"
}

try_measure_with_scheme_mode() {
  local raw="$1"
  local primary secondary
  case "$SCHEME_MODE" in
    https) primary="https"; secondary="" ;;
    http)  primary="http";  secondary="" ;;
    keep)
      if has_scheme "$raw"; then primary="keep"; secondary=""
      else primary="http"; secondary=""
      fi
      ;;
    auto)  primary="https"; secondary="http" ;;
    *)     primary="https"; secondary="http" ;;
  esac

  local url used_scheme rc http_code total ttfb conn tls dns rip
  url="$(compose_url_with_scheme "$raw" "$primary")"
  read rc http_code total ttfb conn tls dns rip <<<"$(measure_once "$url")"
  if [[ "$http_code" == "000" && -n "$secondary" ]]; then
    url="$(compose_url_with_scheme "$raw" "$secondary")"
    read rc http_code total ttfb conn tls dns rip <<<"$(measure_once "$url")"
    used_scheme="$secondary"
  else
    used_scheme="$primary"
  fi
  # Substitute "-" for an empty IP so downstream `read` fields stay aligned
  echo "$http_code $total $ttfb $conn $tls $dns ${rip:--} $rc $used_scheme $url"
}

# Iterate URLs
idx=0
for tok in $=RAW_INPUT; do
  url_raw="$(trim "$tok")"
  [[ -z "$url_raw" ]] && continue
  (( idx+=1 ))
  echo "------------------------------------------------------------"
  echo "[$idx] Testing Target: $url_raw"
  echo "Parameters: runs=$RUNS, delay=${DELAY}s, connect_timeout=${CONNECT_TIMEOUT}s, total_timeout=${MAX_TIME}s"

  success_count=0
  http_ok_count=0
  total_list=()
  ttfb_list=()
  last_ip=""
  last_url_used=""

  for ((i=1; i<=RUNS; i++)); do
    read code total ttfb conn tls dns rip rc used_scheme final_url <<<"$(try_measure_with_scheme_mode "$url_raw")"
    last_url_used="$final_url"
    [[ "$rip" != "-" ]] && last_ip="$rip"
    if [[ "$rc" == "0" ]]; then
      (( success_count+=1 ))
      total_list+=("$total")
      ttfb_list+=("$ttfb")
      if [[ "$code" != "000" && "$code" -ge 200 && "$code" -lt 400 ]]; then
        (( http_ok_count+=1 ))
      fi
      printf "  #%02d [%s] total=%.3fs ttfb=%.3fs connect=%.3fs tls=%.3fs dns=%.3fs ip=%s (%s)\n" \
        "$i" "$code" "$total" "$ttfb" "$conn" "$tls" "$dns" "$rip" "$used_scheme"
    else
      reason="$(code_reason "$rc")"
      printf "  #%02d FAIL (rc=%s: %s) code=%s total=%.3fs ip=%s (%s)\n" \
        "$i" "$rc" "$reason" "$code" "$total" "$rip" "$used_scheme"
    fi
    if (( i < RUNS )); then sleep "$DELAY"; fi
  done

  # Stats: one value per line so awk sees each timing as its own record
  avg_total="-" min_total="-" max_total="-"
  if (( ${#total_list[@]} > 0 )); then
    read avg_total min_total max_total <<<"$(printf "%s\n" "${total_list[@]}" |
      awk '{s+=$1; if(NR==1){min=$1;max=$1} if($1<min)min=$1; if($1>max)max=$1}
           END{if(NR>0) printf "%.3f %.3f %.3f", s/NR, min, max}')"
  fi

  echo "Summary for: $url_raw"
  echo "  Final URL used:             ${last_url_used:-(not resolved)}"
  echo "  Remote IP (last success):   ${last_ip:--}"
  echo "  Success count (curl level): $success_count / $RUNS"
  echo "  HTTP 2xx/3xx count:         $http_ok_count / $RUNS"
  echo "  Total time (s): avg=$avg_total, min=$min_total, max=$max_total"
done
echo "------------------------------------------------------------"
```

---

## Understanding the Output

The script's output is intuitive; each line holds the detailed data for a single request:

- `code`: The HTTP status code. `000` typically means a network- or SSL-layer error occurred before an HTTP response was received.
- `total`: Total time for the entire request, in seconds.
- `ttfb`: Time To First Byte, from the start of the request until the first byte of the response arrives; this best reflects server-side processing latency.
- `connect`: Time to establish the TCP connection.
- `tls`: Time for the TLS handshake (0 for plain-HTTP sites).
- `dns`: Time for DNS resolution.
- `ip`: The IP address of the destination server.
- `(https)`: The protocol actually used for this request.
- `FAIL (rc=X)`: Printed when the curl command itself fails (non-zero exit code), together with the mapped reason.

The final summary section provides success rates and statistics for the key metrics, helping you quickly assess the overall performance of the service.
---

## Conclusion

This Zsh script is a powerful wrapper around the `curl` utility, offering developers and operations engineers a convenient and efficient way to test website performance and stability. By running this script periodically, you can easily monitor the health of your services and identify potential performance bottlenecks. Hopefully this handy utility from the lib00 project boosts your productivity.
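For the periodic runs mentioned above, a cron entry is the lightest-weight option (cron is still available on macOS, though launchd is the native alternative). This is a sketch only: the schedule, script path, target hosts, and log path are placeholders to adapt to your setup.

```cron
# Illustrative crontab entry (edit with `crontab -e`): run the checker
# every 10 minutes and append results to a log. All paths are placeholders.
*/10 * * * * /path/to/curl_check_lib00.sh wiki.lib00.com example.com >> "$HOME/curl_check.log" 2>&1
```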