caches, or imply a substantial change in user agent design.
\citeauthor{feltenTimingAttacksWeb2000} \cite{feltenTimingAttacksWeb2000} were
the first to conduct a study on the feasibility of cache timing attacks and
concluded that the accuracy in determining whether a file has been loaded from
cache or downloaded from a server is generally very high ($>95\%$). Furthermore,
they evaluated a host of countermeasures, such as turning off caching, altering
hit or miss performance, and turning off Java and JavaScript, but concluded that
these were at best unattractive and at worst ineffective. They propose a partial
remedy for
tagging would effectively nullify the performance boost a \gls{CDN} provides by
converting every cache hit into a cache miss. The authors themselves question
the effectiveness of such an approach.

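The core of such a timing attack can be reduced to a simple threshold decision: a resource that finishes loading in well under one network round trip cannot have touched the server and was therefore most likely served from the local cache. The following sketch illustrates this idea; the function, the cut-off factor, and the sample timings are illustrative assumptions, not taken from the cited paper.

```python
# Minimal sketch of Felten-style cache probing. A load that completes in
# well under one round-trip time (RTT) cannot have reached the server,
# so it must have been answered from the browser cache.

def classify_load(load_time_ms: float, network_rtt_ms: float) -> str:
    """Label a timed resource load as a cache hit or a network fetch."""
    threshold = 0.5 * network_rtt_ms  # illustrative cut-off, not from the paper
    return "cache hit" if load_time_ms < threshold else "network fetch"

# Fabricated example measurements with a 40 ms RTT:
print(classify_load(3.0, 40.0))   # well below one RTT -> "cache hit"
print(classify_load(55.0, 40.0))  # slower than one RTT -> "network fetch"
```

The attacker does not need the victim's cooperation for this: embedding a third-party resource and timing its load from script suffices, which is what makes the high classification accuracy reported above privacy-relevant.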
Because the attack presented by \citeauthor{feltenTimingAttacksWeb2000} relies
on being able to accurately time resource loading, a reliable network is needed.
Today, a sizeable portion of internet activity comes from mobile devices, which
are often connected not via cable but wirelessly.
\citeauthor{vangoethemClockStillTicking2015}
\cite{vangoethemClockStillTicking2015} have therefore proposed four new methods
to accurately time resource loading over unstable networks. Using these
improved methods, they managed to determine whether a user is a member of a
particular age group (in this case between 23 and 32). The authors also ran
their attacks against other services (LinkedIn, Twitter, Google, and Amazon),
successfully extracting sensitive information about users. The research
discussed so far has not tackled the problem from a quantitative perspective
but has instead focused on individual cases. To fill this gap,
\citeauthor{sanchez-rolaBakingTimerPrivacyAnalysis2019}
\cite{sanchez-rolaBakingTimerPrivacyAnalysis2019} conducted a survey of 10K
websites to determine how feasible it is to perform a history sniffing attack on
a large scale. Their tool \textsc{BakingTimer} collects timing information on
\gls{HTTP} requests, checking for logged-in status and sensitive data. Their
results show that 71.07\% of the surveyed websites are vulnerable to the
attack.

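Why repeated measurement helps on unstable networks can be illustrated with a minimal sketch: taking the median of many probes suppresses jitter spikes that would mislead a single measurement. The sample values below are fabricated for illustration; the actual techniques of \citeauthor{vangoethemClockStillTicking2015} are considerably more sophisticated.

```python
# Hedged sketch: robust timing over an unstable network. The median of
# repeated probes discards outliers caused by network jitter, so a cached
# and an uncached resource remain distinguishable even with noisy samples.

import statistics

def robust_estimate(samples_ms: list[float]) -> float:
    """Median load time over repeated probes; jitter outliers drop out."""
    return statistics.median(samples_ms)

# Fabricated sample sets: a cached resource with one jitter spike,
# and an uncached resource fetched over the network.
cached = [3.1, 2.9, 3.0, 48.0, 3.2]      # one 48 ms spike from jitter
uncached = [41.0, 44.5, 39.8, 42.2, 40.7]

print(robust_estimate(cached))                                # 3.1
print(robust_estimate(cached) < robust_estimate(uncached))    # True
```

A naive mean over the cached samples would be dragged toward the jitter spike, while the median stays close to the true cache-hit time; this is the intuition behind measuring repeatedly rather than once.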
\subsection{Cache Control Directives}
\label{subsec:cache control directives}

Cache control directives can be supplied in the Cache-Control \gls{HTTP} header,
allowing rules for the storing, updating, and deletion of resources in the cache
to be defined.

\subsection{DNS Cache}
\label{subsec:dns cache}
