Nowadays JSON is a very common choice for data interchange on the web. A lot of REST APIs use JSON-encoded bodies in their requests and responses. The default choice for dealing with JSON in Haskell is the aeson package, so it’s not surprising that most nontrivial Haskell projects depend on aeson, directly or indirectly. Typeable heavily relies on aeson too.
As a core package everyone depends on, aeson is relatively well optimised. Yet there is always room for improvement. Given that even a slight speedup in a core package will have a big impact on the whole community, we at Typeable decided to invest some time into aeson performance. In this post we’ll describe one trick we applied to speed up the JSON parser in this pull request.
One of the most performance-critical parts of any JSON parser is string unescaping. The specification requires non-printable characters, the double quote and the backslash to be escaped. E.g. the string a"b is represented as "a\"b". Also, an arbitrary character can be represented by its Unicode code point, e.g. \u045E represents CYRILLIC SMALL LETTER SHORT U, namely ў.
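As a quick illustration, decoding these two forms with aeson in GHCi looks roughly like this (a sketch assuming the aeson package is available and the OverloadedStrings extension is enabled; Data.Aeson.decode and the Text instance do the unescaping for us):

  >>> :set -XOverloadedStrings
  >>> import Data.Aeson (decode)
  >>> import Data.Text (Text)
  >>> decode "\"a\\\"b\"" :: Maybe Text   -- the JSON document "a\"b"
  Just "a\"b"
  >>> decode "\"\\u045E\"" :: Maybe Text  -- the JSON document "\u045E"
  Just "\1118"

Here Just "\1118" is just GHCi’s escaped rendering of ў.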
When parsing JSON, we have to unescape strings back. Doing it naïvely may introduce a performance bottleneck because unescaping is stateful: when scanning the string we need to remember whether the previous character was a backslash. But most strings in a typical JSON document are not escaped, so the naïve implementation makes the typical case unnecessarily slow. Here the slow path trick comes into play.
Let’s describe the trick in a more general setting. Suppose we have a complex and slow, but correct algorithm. And we have a simpler and faster algorithm, which covers only the typical case but isn’t applicable in general. The trick is to pretend that only the typical case exists and apply the fast algorithm (the common path), but fall back to the slower one when necessary (the slow path). In the case of string unescaping in a JSON parser, the common path is when unescaping is not needed, and the slow path actually unescapes the string.
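Schematically, the trick amounts to a cheap check followed by a branch. Here is a minimal toy sketch (a hypothetical stripSpaces function, unrelated to aeson) where the common path returns the input untouched and the slow path rebuilds it:

  import Data.Char (isSpace)

  -- Toy illustration of the slow path trick: if most inputs contain no
  -- spaces, the common path avoids rebuilding the string at all.
  stripSpaces :: String -> String
  stripSpaces s
    | not (any isSpace s) = s                        -- common path: nothing to do
    | otherwise           = filter (not . isSpace) s -- slow path: the general case

The check itself still scans the input, but it is simple and allocation-free, which is exactly the shape the unescaping fast path will take below.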
Separating the common and the slow paths has two benefits. Since we apply a faster algorithm for the typical case, overall performance improves. But there is also an interesting side effect. Often the common and the slow paths have different sets of trade-offs. If we mix both in one algorithm, then we have to sacrifice the general case performance in order to improve the typical case. As a result, we often get an implementation that is optimal neither for the general nor for the typical case. Separating them, on the other hand, allows us to optimize the paths independently, improving performance even for the general case!
Let’s see how to apply the trick to the JSON parser in aeson. The parser is implemented using the attoparsec package. The parser for strings looks like the following:
-- | Parse a string without a leading quote.
jstring_ :: Parser Text
jstring_ = do
  (s, S _ escaped) <- A.runScanner startState go <* A.anyWord8
  -- We escape only if there are
  -- non-ascii (over 7bit) characters or backslash present.
  if isTrue# escaped
    then case unescapeText s of
      Right r -> return r
      Left err -> fail $ show err
    else return (TE.decodeUtf8 s)
  where
    startState = S 0# 0#
    go (S skip escaped) (W8# c)
      | isTrue# skip = Just (S 0# escaped')
      | isTrue# (w ==# 34#) = Nothing -- double quote
      | otherwise = Just (S skip' escaped')
      where
        w = word2Int# c
        skip' = w ==# 92# -- backslash
        escaped' = escaped
          `orI#` (w `andI#` 0x80# ==# 0x80#) -- c >= 0x80
          `orI#` skip'

data S = S Int# Int#
(You can find the full code here)
The idea here is simple: we scan the input until we reach the closing quote, then we unescape the string. The skip flag in the go helper makes sure we won’t stop on an escaped double quote character.
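To make the scanner’s behaviour concrete, here is a hand-worked trace (my own illustration, not part of the library) of go over the bytes that follow the opening quote of the escaped document "a\"b":

  -- Trace of go over the bytes:  a  \  "  b  "
  --
  --   byte          skip on entry   result
  --   'a'  (0x61)   0               continue; skip' = 0, escaped' = 0
  --   '\'  (0x5C)   0               continue; skip' = 1, escaped' = 1
  --   '"'  (0x22)   1               continue; the escaped quote does not stop the scan
  --   'b'  (0x62)   0               continue; skip' = 0, escaped' stays 1
  --   '"'  (0x22)   0               Nothing: unescaped closing quote, scanning stops
  --
  -- escaped ends up set, so unescapeText is applied to the collected bytes.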
The code is manually optimized, i.e. the go helper manipulates unboxed values. Also, the code already tries to optimize for the common case: it applies the unescapeText function only when unescaping is needed. That’s why we need the escaped flag: we don’t need to unescape the string unless we saw a backslash or a non-ASCII character in it.
There are two problems with the code above. First, we run the stateful scanner even for the typical case: maintaining the skip and escaped flags takes CPU cycles and slows down the parser even for strings that are not escaped. Second, keeping both cases in one function forces the hand-optimized, unboxed style onto all of it, which hurts readability.
Let’s apply the slow path trick described above and pretend that we never need to unescape strings. In that case, the algorithm is quite simple: we just take the input until we reach the closing double quote:
jstring_ = do
  s <- A.takeWhile (\w -> w /= DOUBLE_QUOTE)
  A.anyWord8 -- skip the closing double quote
  return (TE.decodeUtf8 s)
This code is much faster, though it fails when the string is escaped. We need a way to detect that we have to fall back. In this particular case it’s easy: we stop not only on the double quote, but also on the backslash and on any non-ASCII character. If we stopped on the double quote, then we are fine; otherwise we fall back:
jstring_ = do
  s <- A.takeWhile (\w -> w /= DOUBLE_QUOTE && w /= BACKSLASH && not (testBit w 7))
  w <- A.peekWord8
  case w of
    Just DOUBLE_QUOTE -> A.anyWord8 $> TE.decodeUtf8 s
    _                 -> slowPath s -- here we fall back to the general case
Here slowPath is basically a call to the original jstring_ function, modified to take the already consumed part of the input into account. Applying this optimization improves parser performance by 15%-45%, depending on how many escaped strings the input contains.
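For context, a comparison of this kind could be measured with criterion along these lines; the file names and the benchmark layout below are hypothetical, this is not the benchmark suite used for the pull request:

  import Criterion.Main (bench, defaultMain, nf)
  import Data.Aeson (Value, decode)
  import qualified Data.ByteString.Lazy as BL

  main :: IO ()
  main = do
    -- Hypothetical inputs: one file with mostly plain strings,
    -- one with heavily escaped strings.
    plain   <- BL.readFile "plain-strings.json"
    escaped <- BL.readFile "escaped-strings.json"
    defaultMain
      [ bench "mostly plain strings"   $ nf (decode :: BL.ByteString -> Maybe Value) plain
      , bench "mostly escaped strings" $ nf (decode :: BL.ByteString -> Maybe Value) escaped
      ]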
As mentioned above, separating out the slow path may open up a possibility to improve its performance, and that’s true in our case too. Note that we don’t need the escaped flag anymore, because we already know that the string is escaped: it’s the slow path after all. Removing the flag eliminates the need for manual unboxing, and now the idiomatic implementation is as fast as the hand-optimized one:
slowPath s' = do
  s <- A.scan startState go <* A.anyWord8
  case unescapeText (B.append s' s) of
    Right r -> return r
    Left err -> fail (show err)
  where
    startState = False
    go a c
      | a = Just False
      | c == DOUBLE_QUOTE = Nothing
      | otherwise = let a' = c == BACKSLASH
                    in Just a'
I didn’t notice any speedup from this change, but at least it simplified the code and improved maintainability.
The slow path trick is not a silver bullet and can be applied only when certain conditions are met. Obviously, there should exist a typical case that is common enough to make the optimization worthwhile. But there should also be a cheap way to detect that we need to fall back to the general algorithm, so that the check doesn’t slow down the common path too much.
Of course, the slow path optimization trick described here is not new. I don’t remember where exactly I learned it from, but I’m sure it’s widely known. Yet people often overlook this simple but powerful optimization, so I think it’s worth drawing more attention to it. I hope you’ll spot a possibility to apply the trick the next time you work on performance optimization.
The aeson package is already optimized for performance, and it’s a challenge to find room for improvement there, so I think a 15%-45% speedup is something to be proud of. But we are not going to stop here. We are already working on another optimization, and our preliminary measurements are quite promising. So stay tuned!