Proxy Methods
Fetch external content through the TelemetryOS proxy service to handle CORS restrictions with intelligent caching and bandwidth management.
Overview
The Proxy Methods let you fetch data from any external API or URL. Requests are routed through the TelemetryOS platform, which automatically handles CORS restrictions and caches responses across your entire device fleet.
You can call proxy().fetch() and use the response directly -- the platform handles caching for you. There is no need to build your own caching layer using the store or other mechanisms.
About CORS: TelemetryOS applications run in browser contexts where Cross-Origin Resource Sharing (CORS) policies apply. The Proxy API solves CORS limitations by routing requests through the TelemetryOS platform. For complete CORS information, see CORS.
Importing
import { proxy } from '@telemetryos/sdk';
Methods
fetch()
Fetch content from external URLs through the platform proxy.
Signature:
async fetch(input: RequestInfo | URL, init?: RequestInit): Promise<Response>
Parameters:
- input: URL as string, URL object, or Request object
- init: Optional fetch options (method, headers, body, etc.)
Returns: Promise<Response> - Standard Response object
Example:
const response = await proxy().fetch('https://query1.finance.yahoo.com/v8/finance/chart/AAPL?interval=1d&range=5d');
const data = await response.json();
// Use the data directly - the platform caches it for you
console.log(data.chart.result[0].meta.regularMarketPrice);
Caching Behavior
The proxy includes built-in fleet-wide caching. When any device in your fleet fetches a URL, the response is cached on the platform so that subsequent requests from other devices are served from cache. You do not need to implement any caching logic in your application.
How It Works
- Device calls proxy().fetch() -- the request is sent to the TelemetryOS platform
- Platform checks its cache -- a two-tier cache (in-memory + database) looks for a valid cached response
- Cache hit: the cached response is returned immediately without contacting the external server
- Cache miss: the platform fetches from the external server, caches the response, and returns it
Because the cache is shared across your entire fleet, even if you have hundreds or thousands of devices requesting the same URL, the external server typically sees only one request per cache period. This prevents rate limiting and reduces bandwidth usage.
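The bandwidth savings from the shared cache can be estimated with a small calculation. The helper below is illustrative only (it is not part of the SDK); it assumes every device polls the same URL at a fixed interval and that a cached response is reused for its full TTL:

```typescript
// Rough estimate of requests reaching the origin server per hour.
// With the fleet-wide cache, at most one request per cache period
// reaches the origin, regardless of fleet size.
function originRequestsPerHour(
  devices: number,
  pollIntervalSeconds: number,
  cacheTtlSeconds: number | null
): number {
  const fleetRequestsPerHour = devices * (3600 / pollIntervalSeconds);
  if (cacheTtlSeconds === null) return fleetRequestsPerHour; // no caching
  return Math.min(fleetRequestsPerHour, 3600 / cacheTtlSeconds);
}

// 1,000 devices polling every 60 seconds:
originRequestsPerHour(1000, 60, null); // 60000 requests/hour without caching
originRequestsPerHour(1000, 60, 3600); // 1 request/hour with a one-hour cache
```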
Fleet Bandwidth Management
Cache control is critical when scaling to large device fleets. Consider a deployment with 1,000 devices all polling the same API every minute. Without caching, that's 1,000 requests per minute hitting the external server. With a one-hour cache, only one request per hour reaches the source:
// 1,000 devices fetching the same menu data — only one request per hour hits the API
const response = await proxy().fetch('https://api.example.com/menu-items', {
headers: { 'Cache-Control': 'max-age=3600' }
});
const menuData = await response.json();
Choose your max-age based on how frequently the source data changes:
| Use Case | Recommended max-age | Rationale |
|---|---|---|
| Stock tickers, live scores | 60 (1 min) | Data changes frequently |
| Weather forecasts | 900 (15 min) | Updates every 15-30 minutes |
| Menu prices, inventory | 3600 (1 hour) | Changes a few times per day |
| Static content, logos | 86400 (24 hours) | Rarely changes |
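A small helper can keep these TTL choices in one place. This is a hypothetical convenience function, not part of the SDK:

```typescript
// Build a Cache-Control request header for a given TTL in seconds.
// (Hypothetical helper for organizing the TTLs from the table above.)
function cacheHeaders(maxAgeSeconds: number): Record<string, string> {
  return { 'Cache-Control': `max-age=${maxAgeSeconds}` };
}

// Usage with the proxy, picking a TTL from the table above:
// const response = await proxy().fetch('https://api.example.com/forecast', {
//   headers: cacheHeaders(900), // weather data: 15-minute cache
// });
```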
Default Caching
- GET and HEAD requests: Cached for 60 seconds by default when no Cache-Control header is provided by either the client or the upstream server
- Other methods (POST, PUT, DELETE, PATCH): Never cached. Successful responses (2xx-3xx) from unsafe methods automatically invalidate all cached entries for that URL.
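The default rules above can be sketched as two predicates. This is an illustrative model of the documented behavior, not the platform's actual implementation:

```typescript
// Illustrative model of the default caching rules (not platform code).
const SAFE_METHODS = new Set(['GET', 'HEAD']);

// GET and HEAD are cached (60 s by default); other methods are not.
function cachedByDefault(method: string): boolean {
  return SAFE_METHODS.has(method.toUpperCase());
}

// Successful responses (2xx-3xx) from unsafe methods evict all cached
// entries for that URL.
function invalidatesCache(method: string, status: number): boolean {
  return !cachedByDefault(method) && status >= 200 && status < 400;
}

cachedByDefault('GET');          // true: cached for 60 seconds by default
invalidatesCache('POST', 201);   // true: evicts cached entries for the URL
invalidatesCache('DELETE', 500); // false: failed request, cache untouched
```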
Cache-Control Priority
The cache TTL is determined by this priority chain:
1. Upstream server max-age — If the external server responds with Cache-Control: max-age=N, that value is used
2. Client request max-age — If the server doesn't specify max-age, the value from your proxy().fetch() request headers is used
3. Default TTL (60 seconds) — If neither provides max-age, the platform uses a 60-second default
This means you can always extend the cache duration by setting max-age in your request, but the upstream server's directives take precedence when present. If the upstream server sends no-store or private, the response is never cached regardless of your request headers.
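The priority chain can be expressed as a small resolver. This is an illustrative sketch of the documented precedence, not the platform's code (the no-store and private cases, which disable caching entirely, are omitted for brevity):

```typescript
// Illustrative sketch of the TTL priority chain (not platform code).
// Pass null when a party did not supply a max-age directive.
function resolveTtl(
  serverMaxAge: number | null,
  clientMaxAge: number | null
): number {
  if (serverMaxAge !== null) return serverMaxAge; // upstream wins
  if (clientMaxAge !== null) return clientMaxAge; // then the client request
  return 60;                                      // platform default
}

resolveTtl(300, 3600);  // 300: upstream directive takes precedence
resolveTtl(null, 3600); // 3600: client request max-age is used
resolveTtl(null, null); // 60: default TTL
```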
Cache Revalidation
The platform supports ETag and Last-Modified revalidation. When a cached response includes these headers, the platform sends conditional requests (If-None-Match / If-Modified-Since) to the upstream server. If the content hasn't changed, the server responds with 304 Not Modified and the cached response is reused — saving bandwidth across your fleet.
Controlling Cache with Headers
Control proxy caching using standard Cache-Control request headers:
// Cache for 1 hour — ideal for data that changes infrequently
const response = await proxy().fetch('https://api.example.com/menu-items', {
headers: { 'Cache-Control': 'max-age=3600' }
});
// Cache for 5 minutes
const response = await proxy().fetch('https://query1.finance.yahoo.com/v8/finance/chart/AAPL?interval=1d&range=5d', {
headers: { 'Cache-Control': 'max-age=300' }
});
// Force fresh fetch (bypass cache)
const response = await proxy().fetch('https://api.example.com/inventory', {
headers: { 'Cache-Control': 'no-cache' }
});
Supported client request directives:
| Directive | Effect |
|---|---|
| max-age=&lt;seconds&gt; | Set a custom cache duration (e.g., max-age=3600 for one hour) |
| no-cache | Bypass cache and always fetch fresh from the upstream server |
| no-store | Never cache the response |
Supported upstream server directives:
| Directive | Effect |
|---|---|
| max-age=&lt;seconds&gt; | Sets cache TTL (takes priority over client max-age) |
| no-store | Response is never cached |
| private | Response is never cached (the proxy operates as a shared cache) |
Debugging Cache Behavior
The proxy adds an X-Cache header to responses:
- X-Cache: HIT — Response was served from the platform cache
- No X-Cache header — Response was fetched from the upstream server
Use this to verify your caching strategy is working as expected during development.
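A minimal check during development might look like this. The helper is a hypothetical convenience wrapper around the standard Fetch API Response:

```typescript
// Returns true when a proxied response was served from the platform cache,
// based on the X-Cache header described above.
function servedFromCache(response: Response): boolean {
  return response.headers.get('X-Cache') === 'HIT';
}

// During development:
// const response = await proxy().fetch('https://api.example.com/menu-items');
// console.log(servedFromCache(response) ? 'cache hit' : 'fetched from origin');
```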
Cache Limits
- Maximum cacheable response size: 8 MB. Responses larger than 8 MB are proxied through to the client but not cached.
- Cacheable status codes: 200, 203, 204, 300, 301, 304, 404, 405, 410, 414, 501. Other status codes are returned to the client but not stored in cache.
Vary Header Support
The proxy respects Vary headers from upstream servers. If the upstream includes Vary: Accept-Language, requests with different Accept-Language header values are cached separately. A response with Vary: * is never cached.
Next Steps
- CORS - Learn about CORS and why Proxy is needed
- Weather Methods - Weather data through the built-in TelemetryOS API (no proxy needed)
- Code Examples - Complete proxy integration examples