We provide an example of the non-monotonicity of robustness and improve our experimental evaluation, which now includes a comparison between encodings as well as a comparison against a recently published gradient-descent-based method for quantized networks.
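To make the notion concrete, the sketch below is a hypothetical one-neuron illustration of what non-monotonicity of robustness means: adding bits of precision can both remove and reintroduce adversarial examples. It is not the example or the SMT-based verification method from the paper; the concrete values (w, t, x0, eps) are assumptions chosen only so that the effect is visible.

def quantize(w: float, bits: int) -> float:
    """Round w to a fixed-point value with `bits` fractional bits."""
    scale = 1 << bits
    return round(w * scale) / scale

def is_robust(w_q: float, t: float, x0: float, eps: float) -> bool:
    """Exact robustness check for the monotone classifier x -> [w_q*x >= t]:
    the class is constant on [x0-eps, x0+eps] iff both endpoints agree."""
    lo, hi = w_q * (x0 - eps), w_q * (x0 + eps)
    return (lo >= t) == (hi >= t)

# Hypothetical parameters: real-valued weight, threshold, input, perturbation radius.
w, t, x0, eps = 0.3, 0.298, 1.0, 0.004

for bits in range(2, 9):
    w_q = quantize(w, bits)
    cls = int(w_q * x0 >= t)
    print(f"{bits} bits: w_q={w_q:<10} class={cls} robust={is_robust(w_q, t, x0, eps)}")

In this toy instance the rounded weight oscillates around the decision threshold, so the check reports the input as robust at 2 to 5 bits, non-robust at 6 and 7 bits, and robust again at 8 bits, i.e., robustness is not monotonic in the bit width.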