From 803a56d6dfc2072230880e24c8fa78a7bc9012bc Mon Sep 17 00:00:00 2001
From: "Peter J. Holzer"
Date: Fri, 2 Sep 2022 17:30:00 +0200
Subject: [PATCH] Think about authentication

---
 doc/authentication | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)
 create mode 100644 doc/authentication

diff --git a/doc/authentication b/doc/authentication
new file mode 100644
index 0000000..94a1a71
--- /dev/null
+++ b/doc/authentication
@@ -0,0 +1,40 @@
+Nodes are expected to send data over the open internet.
+Therefore we want some kind of authentication to avoid one node
+impersonating another.
+Basic auth would be a possibility.
+As would SSL client auth.
+I'm somewhat leaning towards a simple hash-based shared secret scheme,
+although I'm not entirely sure why I prefer that over basic auth. Maybe:
+* can be easily implemented in the backend
+* allows overlapping keys, simplifying key changes without potential for
+  data loss.
+Anyway:
+* Each node has an id
+* Each node has a secret key
+* The server also has a list of secret keys for each node
+* The node includes "auth": {"node": node, "timestamp": timestamp,
+  "hmac": HMAC(key, node || timestamp) } with every update
+  The server checks against all stored keys for node.
+  If there is no match, fail.
+  If the timestamp is <= the last recorded timestamp, fail.
+  (this prevents replay attacks but allows for some clock drift)
+
+Use JWT instead of HMAC?
+
++ Public Key instead of shared secret possible (but that only helps if
+  the client signs the request, and then we need either a CA or a way to
+  collect all public keys)
+- If used as a token, there is no replay protection.
+- Expiry time has to be set when creating a token.
+
+Doesn't seem compelling.
+
+Timestamps: I think timestamps are problematic as a replay attack preventer.
+While for a single process it is easy to ensure they are monotonically
+increasing, for multiple processes this is not the case. Think of multiple
+processes started by cron. They will be started almost simultaneously and - if
+they are simple - also report their findings almost simultaneously. It is very
+possible that process A gets its timestamp before process B, but B completes
+its POST request first, invalidating A's timestamp. One possibility would be
+that A then simply retries with a new timestamp. Another would be that the
+server keeps timestamps for some time and rejects only those older than that.
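
Illustration (not part of the patch above): a minimal Python sketch of the
shared-secret HMAC scheme described in the notes. The hash (SHA-256), the
encoding of node || timestamp, and all names (make_auth, verify_auth,
SERVER_KEYS, last_seen) are assumptions for illustration, not existing
project code.

    import hashlib
    import hmac
    import time

    # Hypothetical key store: the server keeps a *list* of valid keys per
    # node, so an old and a new key can overlap during a key change.
    SERVER_KEYS = {"node-a": [b"old-secret", b"new-secret"]}

    # Last accepted timestamp per node (replay protection as described).
    last_seen: dict[str, int] = {}

    def make_auth(node: str, key: bytes) -> dict:
        """Client side: build the "auth" object sent with every update."""
        timestamp = int(time.time())
        msg = f"{node}{timestamp}".encode()        # node || timestamp
        digest = hmac.new(key, msg, hashlib.sha256).hexdigest()
        return {"node": node, "timestamp": timestamp, "hmac": digest}

    def verify_auth(auth: dict) -> bool:
        """Server side: try all stored keys for the node, then require a
        strictly increasing timestamp."""
        node, timestamp = auth["node"], auth["timestamp"]
        msg = f"{node}{timestamp}".encode()
        for key in SERVER_KEYS.get(node, []):
            expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
            if hmac.compare_digest(expected, auth["hmac"]):
                break
        else:
            return False                           # no key matched
        if timestamp <= last_seen.get(node, 0):
            return False                           # replayed or out of order
        last_seen[node] = timestamp
        return True

    auth = make_auth("node-a", b"new-secret")
    assert verify_auth(auth)
    assert not verify_auth(auth)   # same timestamp again -> rejected

Checking against every stored key for the node is what makes overlapping
keys work; hmac.compare_digest avoids leaking timing information.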
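Also illustrative only: the alternative mentioned at the end of the notes,
where the server keeps recently seen timestamps per node for a grace window
and rejects only exact repeats or timestamps older than the window, so
near-simultaneous cron-started processes don't invalidate each other. The
window length and the names are assumptions.

    import time

    WINDOW = 300                     # grace period in seconds; assumed value
    seen: dict[str, set[int]] = {}   # timestamps accepted per node in WINDOW

    def timestamp_ok(node: str, timestamp: int) -> bool:
        """Accept any not-yet-seen timestamp from the last WINDOW seconds."""
        now = int(time.time())
        if timestamp <= now - WINDOW:
            return False                         # older than the window
        live = {t for t in seen.get(node, set()) if t > now - WINDOW}
        if timestamp in live:
            return False                         # exact repeat in the window
        live.add(timestamp)
        seen[node] = live
        return True

Two processes that happen to pick the same second would still collide; as
the notes suggest, the simplest fallback is that the rejected one retries
with a new timestamp.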