Project report: Test infrastructure and property-based tests for Channels

Hi everyone,

I did some work on Channels' tests and testing infrastructure as part of the Mozilla grant. This is my report. 

Channels consists of five projects under the Django organization. First, Andrew and I agreed on the versions of Django, Python and Twisted that we want to support across all of them. I then updated the Travis and tox configurations in all projects to be consistent with each other, and to test all agreed-upon version combinations. We also settled on specifying test dependencies in a setup.py extra. 
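As a sketch of that pattern (the project name and dependency lists here are illustrative placeholders, not the actual Channels packages), test requirements declared as a setup.py extra can be installed with `pip install -e .[tests]`:

```python
# Illustrative setup.py sketch; "example-project" and the dependency
# lists are placeholders, not the real Channels configuration.
from setuptools import setup

setup(
    name="example-project",
    version="0.1",
    install_requires=["twisted"],
    # Test-only dependencies live in an extra, so CI can install them
    # with `pip install -e .[tests]` without burdening normal installs.
    extras_require={
        "tests": ["pytest", "hypothesis"],
    },
)
```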

This was important but unexciting work. The main takeaway might be that it confirmed my view that using Travis to run tox is an anti-pattern: it tends to slow down test runs, and I have also seen cases where tox didn't update the test environment in response to updated requirements - and those cases are hard to debug. Instead, I advocate simplifying the test infrastructure so that there's almost no duplication between the Travis and tox configs; Makefiles or scripts can help with that in more complex cases.

The main idea of my proposal was to introduce tests that are as-close-as-possible translations of the ASGI spec into code. I think this wouldn't have yielded much benefit if it weren't for using Hypothesis to generate test data, a.k.a. "property-based testing". I had previously been impressed by its potential, and really saw it come to fruition here. Hypothesis was annoyingly good at finding edge cases: think of the distinction between passing in None for headers vs. an empty list, using Unicode in query strings, or making it clear that certain characters definitely aren't allowed, e.g. in header names, because they break the underlying HTTP protocol. Last but not least, I also introduced a "kitchen sink" test (e.g. here) for each scenario, which basically lets Hypothesis try all weird combinations by itself; this highlighted at least one more issue.
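To illustrate the style (a minimal self-contained sketch, not one of the actual Daphne tests), a Hypothesis property test for Unicode in query strings might look like this:

```python
# Minimal sketch of a Hypothesis property test, not an actual Daphne test:
# any Unicode text used as a query-string value must survive a
# percent-encoding round trip unchanged.
from urllib.parse import quote, unquote

from hypothesis import given, strategies as st

@given(st.text())
def test_query_value_roundtrip(value):
    # Hypothesis generates many Unicode strings, including the weird
    # edge cases a human tester would be too lazy to type by hand.
    assert unquote(quote(value)) == value
```

Running the test makes Hypothesis generate inputs automatically; when a property fails, it shrinks the input and reports a minimal counterexample.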
In conclusion, Hypothesis let me find corner cases that I wouldn't have thought of or would've been too lazy to try, and it neatly separated test data generation from the actual tests. I vow to use it more in the future :)
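The "kitchen sink" idea can be sketched like this (strategy names and the trivial property are hypothetical simplifications, not the real test suite): one composite strategy builds an entire request-like message, so Hypothesis is free to mutate every field at once:

```python
# Hypothetical "kitchen sink" sketch, simplified from the real tests:
# a single strategy generates whole request messages so Hypothesis can
# combine weird values across all fields at once.
from hypothesis import given, strategies as st

# Header names restricted to a simplified token alphabet (illustrative,
# far stricter than the full RFC 7230 token rules).
header_names = st.text(
    alphabet="abcdefghijklmnopqrstuvwxyz-", min_size=1
).map(str.encode)
header_values = st.binary(max_size=20)

request_messages = st.fixed_dictionaries({
    "path": st.text(min_size=1).map(lambda s: "/" + s),
    "query_string": st.binary(max_size=50),
    "headers": st.lists(st.tuples(header_names, header_values), max_size=5),
})

@given(request_messages)
def test_message_shape(message):
    # Placeholder property: every generated message has an absolute path
    # and byte-pair headers, as an ASGI-style message would require.
    assert message["path"].startswith("/")
    for name, value in message["headers"]:
        assert isinstance(name, bytes) and isinstance(value, bytes)
```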

While there's always more to test, I think we now have high confidence that Daphne does what it says, and any vague or wrong behavior is now well documented. This means that the spec has real meaning, and an alternative implementation should be possible without looking at Daphne's source code. 

Working with Andrew was good - I'm grateful for the opportunity, I achieved my goals, and he always replied promptly to my messages, even though he has a lot of other things on his plate. Thanks again, Andrew!

If you have any questions regarding my work (which you can find in pull requests by 'maikhoepfel') or the project itself, please feel free to ask here or reach out personally.

Cheers,

Maik

--
You received this message because you are subscribed to the Google Groups "Django developers (Contributions to Django itself)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [hidden email].
To post to this group, send email to [hidden email].
Visit this group at https://groups.google.com/group/django-developers.
To view this discussion on the web visit https://groups.google.com/d/msgid/django-developers/3f8a4b27-6a9a-4173-a251-f35bc14a648f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Re: Project report: Test infrastructure and property-based tests for Channels

Adam Johnson
Nice one! 👏

Happy to see more Hypothesis tests - I've had a great experience with it too.

On 15 May 2017 at 11:54, <[hidden email]> wrote:
--
Adam
