In this paper we propose a novel logarithmic-quantization-based DNN (deep neural network) architecture for depthwise separable convolution (DSC) networks. Our architecture is based on selective two-word logarithmic quantization (STLQ), which greatly improves accuracy over logarithmic-scale quantization while retaining its speed and area advantages. However, STLQ introduces a synchronization problem due to variable-latency PEs (processing elements), which we address through a novel architecture and a compile-time optimization technique. Our architecture is also dynamically reconfigurable to efficiently support various combinations of depthwise and pointwise convolution layers. Our experimental results usi...