A list of puns related to "Doctest"
I still have no idea why unit tests are more useful. They take more time to write and have pretty much the same result as a doctest. Is there a simple example of how unit testing code produces more informative results?
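For what it's worth, here is the contrast in miniature - a sketch with a made-up `add` function. The doctest doubles as documentation; the unittest version costs more lines, but a failure names the test, supports fixtures, and can carry a custom message:

```python
def add(a, b):
    """Add two numbers.

    >>> add(2, 3)
    5
    """
    return a + b

import unittest

class TestAdd(unittest.TestCase):
    def test_add(self):
        # on failure, unittest reports the test name plus this message
        self.assertEqual(add(2, 3), 5, "add(2, 3) should be 5")

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # checks the docstring example
    unittest.main()    # runs the TestCase (exits when done)
```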
Thanks to @teryror, support for running doctests has recently landed in Miri. So finally `cargo miri test` is executing all the same tests that `cargo test` runs. This has been a long-standing open issue and I am stoked that it is now finally resolved. :)
The Miri submodule in rustc has been updated, so doctest support will appear in the rustup-distributed Miri with the next nightly release. If that causes trouble for you, e.g. because Miri actually complains about some of your doctests, you can use `cargo miri test --all-targets` to run the other tests but not the doctests. If anything seems wrong, please report an issue. If you don't know what Miri is, our README should help.
Next up: going over all the failing doctests in the standard library, and fixing them...
byexample is a tool that I made to overcome the shortcomings of Python's doctest. It allows you to execute snippets of code inside your documentation (blog post, tutorial, book, ...). In this way you can literally "execute" your docs and verify that they are still up to date.
Around 2016 I was working on my final project (thesis-like) in Python and JavaScript. I used doctest for testing/documenting the Python side, but I couldn't find anything similar for JavaScript, and that's how byexample was born.
Now it supports snippets written in Ruby, Go, C++ and a few other languages, but Python is the one with the best support (of course!).
It has a few years of development behind it now, but this is the first time that I'm posting it on Reddit. Any feedback is welcome, both here and in the GitHub project: https://github.com/byexamples/byexample
If you don't know what doctest is, or you do but it turned out to be hard to debug or use, give byexample a try: after all, I created it to solve those issues!
Thanks for your time!
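To give a flavor of the workflow (a sketch - the exact flags are from my reading of the project's README, so treat them as an assumption): you keep ordinary interpreter-style snippets in your document and point the tool at the file.

```
$ cat tutorial.md
The sum of 1 and 2:
>>> 1 + 2
3
$ byexample -l python tutorial.md
```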
I would like to write non-Rust code (essentially ASCII art) into some of my function documentation, but when I do this by either indenting the "code" block or surrounding it with triple backticks, `cargo test` complains that a bunch of my "tests" are failing:
/// The following is not supposed to be a doctest for `f`:
///
/// I am not a doctest, but a formal grammar for a language
/// (or something of the sort).
///
/// Oh, but it is.
fn f() {}
How does one prevent literal blocks in doc comments from being interpreted as doctests?
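One answer, since this is standard rustdoc behavior rather than anything project-specific: rustdoc assumes an unannotated code block in a doc comment is a Rust doctest, so tagging the fence with `text` (or `ignore`) turns it into a plain listing:

````rust
/// The following is no longer treated as a doctest for `f`:
///
/// ```text
/// I am not a doctest, but a formal grammar for a language
/// (or something of the sort).
/// ```
fn f() {}
````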
On OS X I can't run doctests over real code. I've added a custom-setup and test-suite inside the .cabal file, a custom Setup.hs file, and a generic test/doctest.hs file. When running (`cabal build doctest ; .../doctest`) the linker hates things:
GHC runtime linker: fatal error: I found a duplicate definition for symbol
__ZN17double_conversion7BitCastIdyEET_RKT0_
whilst processing object file
/Users/tommd/.cabal/store/ghc-8.10.1/dbl-cnvrsn-2.0.2.0-d1095495/lib/libHSdbl-cnvrsn-2.0.2.0-d1095495.a
The symbol was previously defined in
/Users/tommd/.cabal/store/ghc-8.10.1/dbl-cnvrsn-2.0.2.0-d1095495/lib/libHSdbl-cnvrsn-2.0.2.0-d1095495.a(double-conversion.o)
This could be caused by:
* Loading two different object files which export the same symbol
* Specifying the same object file twice on the GHCi command line
* An incorrect `package.conf' entry, causing some object to be
loaded twice.
doctests: doctests: unable to load package `double-conversion-2.0.2.0'
Is this a common issue that is understood? I wasn't successful googling it last time I tried.
I'm really... >> REALLY << excited to announce version 2.3, which brings an extensible reporter system to doctest - the fastest C++ testing framework! There are 2 reporters which come with the framework by default: the console reporter and an XML reporter (and the output can be directed to a file with --out=<filename>).
This has been a 10+ month saga since I began refactoring doctest so the framework could have a decent reporter system. Since then a lot of spaghetti code has been untangled and the framework has become more maintainable (plus it moved to C++11) - without breaking the interface for users.
XML output is vital for testing frameworks so they can be integrated properly into Continuous Integration (CI) workflows, and its absence has been a showstopper for a number of organizations in adopting doctest.
Some more information:
- reporters are implemented through the IReporter interface, and you can "listen" for events such as "test case starting/ending"
- reporters can be selected with --reporters=<filters>, and all of them can be listed with --list-reporters
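By way of illustration, the command-line surface described above looks like this (the binary name is a placeholder):

```
./my_tests --list-reporters                  # list all registered reporters
./my_tests --reporters=xml --out=results.xml # emit XML, e.g. for a CI server
```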
Once again ...
I've been trying out an implementation of doctests (as done in Python) for Emacs Lisp code: https://github.com/riscy/doctest
These are two-liners that you insert in your documentation to illustrate how a function can be used, but which also double as simple unit tests. Does anyone know if something like this already exists for Emacs?
Python has doctests, which means you can write things in docstrings with a certain syntax and a test runner can treat them as assertions.
https://docs.python.org/3.7/library/doctest.html
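A minimal sketch of what that looks like (the function is made up):

```python
def square(x):
    """Return x squared.

    >>> square(3)
    9
    """
    return x * x

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # re-runs the >>> examples and reports any mismatches
```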
It is with great pleasure that I am announcing the release of Doctest 2.0 - the fastest feature-rich C++11 single-header testing framework for unit tests and TDD!
The main 4 developments are:
The move to C++11 allowed me to greatly simplify the codebase. Moving to C++11 also led to droppin...
I've seen the doctests documentation and even the PyCon 2020 YouTube channel has a video on doctests. However, I've never used doctests in my projects and I've never seen any GitHub or other projects that use doctests. What I usually see are various test frameworks like nose, pytest, or unittest.
So how common are doctests in reality? Have you ever used them? How beneficial are they compared to the various test frameworks?
Just migrated the unit tests for one of our projects from Boost.Test to Doctest and decided to share the time measurements while I still have the two branches at hand. You can see the number of the test cases in the run-time output below.
The tests were done on a virtual machine with 4 cores and 8 GB of RAM.
The build was done with GCC 7.3 with `-std='c++17' -O2 -Wall -Wextra -Werror -Winvalid-pch` and 4 threads for parallel building. The setup with Doctest has two additional flags: `-DDOCTEST_CONFIG_SUPER_FAST_ASSERTS -DDOCTEST_CONFIG_NO_COMPARISON_WARNING_SUPPRESSION`.
The Boost version that I'm using is 1.69 and the Doctest version is 2.3.2.
For the run-time tests, I ran them 3 times each and picked the slower ones. The timings for each version were in the same ballpark anyway.
Full rebuild, including the precompiled header, with Boost.Test
real    1m5.710s
user    3m42.350s
sys     0m5.751s
Full rebuild, including the precompiled header, with Doctest
real    0m52.567s
user    2m40.018s
sys     0m4.993s
Rebuild, without the precompiled header, with Boost.Test
real    1m2.391s
user    3m40.168s
sys     0m5.532s
Rebuild, without the precompiled header, with Doctest
real    0m46.351s
user    2m38.131s
sys     0m4.029s
Run-time with Boost.Test
time ./tests.bin -r short
Running 149 test cases...
Test module "p3_tests" has passed with:
149 test cases out of 149 passed
1166 assertions out of 1166 passed
real    0m0.056s
user    0m0.025s
sys     0m0.031s
Run-time with Doctest
time ./tests.bin
[doctest] doctest version is "2.3.2"
[doctest] run with "--help" for options
===============================================================================
[doctest] test cases:    149 |    149 passed |      0 failed |      0 skipped
[doctest] assertions:   1166 |   1166 passed |      0 failed |
[doctest] Status: SUCCESS!
real    0m0.042s
user    0m0.021s
sys     0m0.021s
I love doctest.
Writing runnable examples in my documentation is a joy:
-- |
-- >>> loeb [ subtract 1 . (!!1), subtract 2 . (!!2), length ]
-- [0,1,3]
loeb :: Functor f => f (f a -> a) -> f a
loeb f = x where x = fmap ($x) f
The one impedance point I run into sometimes is that my example output is so big that it's not really legible.
data Rose = Rose [Rose] deriving Show
-- |
-- >>> rose 3
-- Rose [Rose [Rose [Rose []],Rose [Rose []]],Rose [Rose [Rose []],Rose [Rose []]],Rose [Rose [Rose []],Rose [Rose []]]]
rose :: Int -> Rose
rose n = Rose . replicate n . rose $ n - 1
I can make it more legible by adding some whitespace:
-- >>> rose 3
-- Rose
-- [ Rose
-- [ Rose
-- [ Rose []
-- ]
-- , Rose
-- [ Rose []
-- ]
-- ]
-- , Rose
-- [ Rose
-- [ Rose []
-- ]
-- , Rose
-- [ Rose []
-- ]
-- ]
-- , Rose
-- [ Rose
-- [ Rose []
-- ]
-- , Rose
-- [ Rose []
-- ]
-- ]
-- ]
But then `doctest` fails that example:
### Failure in Example.hs:6: expression `rose 3'
expected: Rose
[ Rose
[ Rose
[ Rose []
]
, Rose
[ Rose []
]
]
, Rose
[ Rose
[ Rose []
]
, Rose
[ Rose []
]
]
, Rose
[ Rose
[ Rose []
]
, Rose
[ Rose []
]
]
]
but got: Rose [Rose [Rose [Rose []],Rose [Rose []]],Rose [Rose [Rose []],Rose [Rose []]],Rose [Rose [Rose []],Rose [Rose []]]]
`doctest` doesn't have a flag to make the comparison of expected and actual example output whitespace-agnostic, so what's a coder to do?
One low-effort solution to this problem is to take advantage of the interpreter's `:set -interactive-print` option, which lets us choose an alternate printer. For example, if I use the pretty-printer from `pretty-show` instead, it d...
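(The post is cut off here; the gist of the trick, as a sketch, assuming the pretty-show package, whose printer lives at Text.Show.Pretty.pPrint per its docs:)

```haskell
-- Swap GHCi's printer for pretty-show's indented one; doctest evaluates
-- examples in a GHCi session, so the directive works in a doc comment too.
--
-- >>> :set -interactive-print=Text.Show.Pretty.pPrint
-- >>> rose 3
--
-- After the :set line, results are rendered in pPrint's multi-line layout,
-- so the expected output can be written legibly across several lines.
```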
On 2016.05.22 I released version 1.0 of doctest. 1.1 brings the following major changes:
- HUGE improvements in the compile time of asserts - 70-95% faster than the original ones!
- wrote a FAQ - and the much-requested "differences with Catch" section
- MANY minor fixes - it didn't compile when using `nullptr` in asserts, etc. - see the changelog
- improved documentation
- solved a very common problem of testing frameworks with self-registering tests inside static libraries - non-intrusively, in pure CMake, without any source code modification - see here
- added code coverage of the library
You can see the full generated changelog here
The framework is in a very stable condition but there are still a lot of features in the roadmap.
I truly believe this framework can become the preferred choice for testing in C++ - the foundations are solid and pure and I believe there is nothing else quite like it - see the main readme.
The project is in need of sponsors and publicity!
I don't want this post to be a cry for money, but I would greatly appreciate any financial support by individuals or companies using (or wanting to use/support) the framework.
I created a patreon page for that purpose in addition to the pledgie campaign.
You can check out the Hacker News thread for this here.
Any feedback is welcome!
EDIT: [Baptiste Wich ...
What is your test system of choice, and for what reasons? Bonus points if you can point to an "ELI5"-like tutorial on your system of choice. ;-)
For anyone else out there who likes to use vim when writing their haskell code, I want to take a minute to endorse the `:make` command.
By default it runs `make` in a subshell, but if you're averse to Makefiles, you can easily configure vim to run some other command, like `cabal`:
:set makeprg=cabal
I'll often set bindings to make recompiling, running or testing my code easy:
:let mapleader=" "
:nnoremap <Leader>b :make build<CR>
:nnoremap <Leader>r :make run<CR>
:nnoremap <Leader>t :make test<CR>
To me, the real benefit of `:make` over just doing `:!make` or `:!cabal` is the post-processing vim does on the output of `makeprg`, using 'errorformat' to parse the output and jump to the file/line/column mentioned in the error message.
Compiler reports an error? Vim takes me right to where the error message says the problem is.
The default 'errorformat' handles error messages from cabal and ghc just fine, so there's no configuration required there.
When writing my haskell docs, I love using doctest to make sure my examples aren't lying. Doctest errors look a little different, though, and weren't getting successfully parsed for me.
For example, given two broken examples like:
-- Temp.hs
module Temp where
-- |
-- >>> foo "Unterminated string
-- "gnirts detanimretnU"
foo :: String -> String
foo = reverse
-- |
-- >>> bar
-- "Hello"
bar :: String
bar = "World"
`doctest Temp.hs` reports:
### Failure in Temp.hs:4: expression `foo "Unterminated string'
expected: "\"gnirts detanimretnU\""
but got: ""
"\ESC[;1m<interactive>:23:25: \ESC[;1m\ESC[31merror:\ESC[0m\ESC[0m\ESC[;1m\ESC[0m\ESC[0m\ESC[;1m"
" lexical error in string/character literal at end of input\ESC[0m\ESC[0m"
"\ESC[0m\ESC[0m\ESC[0m"
### Failure in Temp.hs:10: expression `bar'
expected: "Hello"
but got: "World"
Examples: 2 Tried: 2 Errors: 0 Failures: 2
vim would incorrectly parse out the error locations as:
### Failure in Temp.hs
"<interactive>
"### Failure in Temp.hs
"To fix this, and enabl
I'm writing some docstrings for a Python project, and some of them include examples in doctest format. I'm not using them as tests, just examples.
For those unfamiliar with the terminology, take a look at this:
def add(a, b):
    """Adds two numbers.

    Example:
    >>> add(2, 3)
    5
    >>> add(3,4)
    7
    """
    return a + b
The part with the >>> is the doctest part. It's meant to look exactly like copy-pasted output of the python interpreter. The lines beginning with >>> are the code, and the lines following them are the results.
Is there a way (command or plugin) that updates/replaces those lines with the results?
For example, something that replaces:
>>> add(2, 3)
>>> add(3,4)
with
>>> add(2, 3)
5
>>> add(3,4)
7
and
>>> add(2, 3)
6
>>> add(3,4)
7
with
>>> add(2, 3)
5
>>> add(3,4)
7
I tried filtering a selection encompassing the block through python, after removing all of the non-doctest lines with this command:
'<,'>!sed -e "/\s*>/\!d" -e "s/^\s*>>> //g" | python
but that only returns the results, like this:
5
7
Any ideas? I thought about starting a python interpreter instance in a tmux pane, piping the selection there, and capturing the output. Any better ideas (maybe without tmux)? Something with :terminal? A plugin?
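One possible approach, sketched below: a small filter script that re-runs each example with Python's own doctest parser and prints the block with fresh outputs. Everything here is hypothetical (the script name, and the import you would need so the examples' names resolve); from vim you could then filter just the example lines with '<,'>!python update_doctests.py.

```python
# update_doctests.py - a hypothetical sketch: read a doctest-formatted block
# on stdin, re-run each example, and print the block with fresh outputs.
import doctest
import io
import sys
from contextlib import redirect_stdout

globs = {}
# Hypothetical: the names used by your examples must be visible here, e.g.
# exec("from mymodule import *", globs)

text = sys.stdin.read()
for example in doctest.DocTestParser().get_examples(text):
    source = example.source.rstrip("\n")
    print(">>> " + source.replace("\n", "\n... "))
    buf = io.StringIO()
    with redirect_stdout(buf):
        try:
            # expression: echo its repr, like the interactive interpreter
            value = eval(compile(source, "<doctest>", "eval"), globs)
            if value is not None:
                print(repr(value))
        except SyntaxError:
            # statement (assignment, def, ...): nothing to echo
            exec(compile(source, "<doctest>", "exec"), globs)
    if buf.getvalue():
        print(buf.getvalue(), end="")
```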
For an assignment at school, we have to write a program which will print the day of the week for 10/10, 4/4, etc. for each year, and submit doctests for it. My code is here: http://pastebin.com/kbShNP4P
But when I run doctest, I get this:
doctest.testmod(Doomsday.doomsday(2012), verbose=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\python27\lib\doctest.py", line 1871, in testmod
    raise TypeError("testmod: module required; %r" % (m,))
TypeError: testmod: module required; 3
Please help, I can't seem to find any reason for this output, even though the doctest command I used (doctest.testmod(Doomsday.doomsday(2012), verbose=True)) was the same as my teacher's.
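For what it's worth (reading the error, not having run the code): testmod() wants a module object, but Doomsday.doomsday(2012) evaluates to 3 before testmod ever sees it - hence "module required; 3". Passing the module itself should work:

```python
import doctest
import Doomsday  # the module whose docstrings hold the tests

doctest.testmod(Doomsday, verbose=True)
```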
I'm happy to announce version 1.2 of doctest! The main changes are:
- logging with `INFO("text: " << some_variable)`, with lazy string creation - the string is constructed only when an assert fails later in the current scope
- the `TEST_SUITE()` macro now works with blocks of code
- asserts can now contain commas: `CHECK(std::vector<int>{1, 2} == std::vector<int>{1, 3});` (only if variadic macros are enabled)
There are 2 new important things for doctest since the release of version 2.0:
- `CHECK_THROWS_WITH(expr, exception_message);` for testing if an expression threw an exception containing a specific message (a usage sketch follows below)
- the `DOCTEST_CONFIG_SUPER_FAST_ASSERTS` config identifier now also affects the normal expression-decomposing asserts such as `CHECK(a == b);`, resulting in even faster compile times for asserts without having to rewrite them to the binary form `CHECK_EQ(a, b);` - see the updated benchmarks
Release:
https://github.com/onqtam/doctest/releases/tag/2.2.0
Question: do you think `DOCTEST_CONFIG_SUPER_FAST_ASSERTS` should be enabled by default?
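A usage sketch for the new assert (the helper function and messages are made up):

```cpp
#define DOCTEST_CONFIG_IMPLEMENT_WITH_MAIN
#include "doctest.h"

#include <stdexcept>

// made-up function under test: rejects negative input
static int checked_input(int x) {
    if (x < 0) throw std::invalid_argument("negative input");
    return x;
}

TEST_CASE("the thrown exception's message is checked") {
    CHECK_THROWS_WITH(checked_input(-1), "negative input");
}
```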
Or is there a plan or project implementing doctests in Go, like https://docs.python.org/3.7/library/doctest.html ?
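Not doctest exactly, but Go's testing package ships a close analogue: "testable examples". An Example function in a _test.go file whose final // Output: comment states the expected stdout is checked by go test, much like a doctest. A minimal sketch:

```go
package greet_test // hypothetical package

import "fmt"

// Run by `go test`; fails if the printed text differs from the
// Output comment below.
func ExampleHello() {
	fmt.Println("hello")
	// Output: hello
}
```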
How to run doctest examples using `stack test` rather than the doctest command?
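The usual recipe, sketched (file names and the src directory are assumptions): declare a cabal/stack test-suite of type exitcode-stdio-1.0 whose main-is points at a tiny driver calling the doctest library, and `stack test` will pick it up.

```haskell
-- test/doctests.hs: minimal driver using the doctest package's API
module Main (main) where

import Test.DocTest (doctest)

main :: IO ()
main = doctest ["src"]  -- source dirs / GHC flags whose examples to run
```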
I released version 1.0 of doctest 4 months ago.
Today I'm happy to announce version 1.1 - which comes mainly with bugfixes and compile time improvements!
Check out the /r/cpp thread for the discussion.