package main

import (
	"compress/gzip"
	"io"
	"io/ioutil"
	"net/http"
	"strings"
	"sync"
)

// gzPool recycles gzip writers across requests instead of allocating a new one each time.
var gzPool = sync.Pool{
	New: func() interface{} {
		return gzip.NewWriter(ioutil.Discard)
	},
}

type gzipResponseWriter struct {
	io.Writer
	http.ResponseWriter
}

// WriteHeader drops Content-Length, since the compressed size isn't known up front.
func (w *gzipResponseWriter) WriteHeader(status int) {
	w.Header().Del("Content-Length")
	w.ResponseWriter.WriteHeader(status)
}

func (w *gzipResponseWriter) Write(b []byte) (int, error) {
	return w.Writer.Write(b)
}

// Gzip wraps next, compressing responses for clients that accept gzip.
func Gzip(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		gz := gzPool.Get().(*gzip.Writer)
		defer gzPool.Put(gz) // defers run LIFO, so Close below runs before Put
		gz.Reset(w)
		defer gz.Close()
		next.ServeHTTP(&gzipResponseWriter{ResponseWriter: w, Writer: gz}, r)
	})
}
great, thanks
thanks
First of all, thanks for a great solution. It really helped me out. However, I ran into a problem where the compressed response was effectively "ignored": it breached the buffer-before-chunking threshold (2 KB at the time of writing this), so it was sent chunked, and my response jumped from 2 KB to 15 KB.
To solve it, I updated the Gzip handler to do the following instead of lines 45 & 46. I'm open to any feedback to make this better if it causes too much GC headache.
var b bytes.Buffer
gz.Reset(&b)
defer func() {
	gz.Close()
	w.Header().Set("Content-Length", fmt.Sprint(b.Len()))
	w.Write(b.Bytes())
}()
Another issue - if applied to e.g. Prometheus HTTP handlers, it will lead to double-gzipping. To avoid it, strip Accept-Encoding from the request before passing it down the handler chain.
		r.Header.Del("Accept-Encoding") // prevent double-gzipping
		next.ServeHTTP(&gzipResponseWriter{ResponseWriter: w, Writer: gz}, r)
	})
}
See also github.com/NYTimes/gziphandler.
Created a package for this: https://github.com/TelephoneTan/GoHTTPGzipServer
This code doesn't work for me... instead of showing the content, it transfers an empty file.
I think the defer Close line should come before the defer gzPool.Put line, to prevent any writes going to the writer after it has been put back in the pool.
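Worth noting: deferred calls run last-in-first-out, so with `defer gzPool.Put(gz)` written first and `defer gz.Close()` written second, Close already executes before Put. A tiny demo of the ordering:

```go
package main

import "fmt"

// Deferred calls run in LIFO order: the defer registered last executes first.
// So deferring Put before Close, as the gist does, still closes the writer
// before it goes back into the pool.
func main() {
	order := []string{}
	func() {
		defer func() { order = append(order, "Put") }()   // registered first, runs last
		defer func() { order = append(order, "Close") }() // registered second, runs first
	}()
	fmt.Println(order) // [Close Put]
}
```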
I did a little benchmarking and I didn't see any improvement using sync pool. It was actually slower.
Using a sync.Pool may well make your code a bit slower, but it's more memory-efficient, since you don't allocate a new gzip writer on every request.
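That speed-vs-allocations trade-off is easy to measure yourself. This sketch (function names and payload size are arbitrary) uses testing.Benchmark to compare allocations per operation with and without the pool; gzip.NewWriter allocates large internal buffers, which is exactly what the pool amortizes:

```go
package main

import (
	"compress/gzip"
	"fmt"
	"io"
	"sync"
	"testing"
)

var pool = sync.Pool{
	New: func() interface{} { return gzip.NewWriter(io.Discard) },
}

var payload = make([]byte, 4096)

// benchAllocs measures allocations per compressed write, with or without
// reusing writers from the pool.
func benchAllocs(usePool bool) int64 {
	res := testing.Benchmark(func(b *testing.B) {
		b.ReportAllocs()
		for i := 0; i < b.N; i++ {
			var gz *gzip.Writer
			if usePool {
				gz = pool.Get().(*gzip.Writer)
				gz.Reset(io.Discard)
			} else {
				gz = gzip.NewWriter(io.Discard) // fresh writer every iteration
			}
			gz.Write(payload)
			gz.Close()
			if usePool {
				pool.Put(gz)
			}
		}
	})
	return res.AllocsPerOp()
}

func main() {
	fmt.Println("allocs/op without pool:", benchAllocs(false))
	fmt.Println("allocs/op with pool:   ", benchAllocs(true))
}
```

Raw time per op can go either way depending on payload size, but the pooled version should allocate far less per request.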
Nice, what is the license for this code?