Testing
We’re going to use pytest for testing, so let’s get that installed:
python3 -m pip install pytest
Testing the Parser
Let’s start with testing the parser itself, before we look at the server. There are a few files to set up:
touch conftest.py
mkdir tests
touch tests/test_parser.py
Create the outline of the first test in tests/test_parser.py:
import pytest
from server import server

def test_valid_greeting_accepted():

    assert True # stub for now
conftest.py lets pytest know this is the project root. It’s needed so that pytest can resolve the from server import server statement in test_parser.py above.
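Nothing needs to go inside the file for this to work: the empty file created by touch above is enough. If you want to make that explicit, a comment-only conftest.py is a reasonable sketch:

# conftest.py
# Deliberately left empty. Its presence at the project root is what lets pytest
# establish the rootdir and resolve `from server import server` in the tests.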
Before we go any further, let’s make sure it’s working (we’ll ignore the tests that come with the skeleton - they’re out of sync with our implementation, and test things differently to what we’re going to cover):
pytest --ignore=server/tests

============== test session starts ==============
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/sfinnie/projects/helloLSP
plugins: typeguard-2.13.3
collected 1 item

tests/test_parser.py . [100%]

============== 1 passed in 0.03s ==============
Positive Tests
The parser is pretty simple but there’s that regular expression. We really want to make sure it’s accepting what we want, and rejecting what we don’t. Here’s the first test:
import pytest
from server import server


def test_valid_greeting_accepted():

    greeting = "Hello Thelma"
    result = server._parse_greet(greeting)

    assert result == []
_parse_greet() returns a list of Diagnostic entries, where each entry denotes an error. If the list is empty then there are no errors. We also need to check for a valid greeting that uses “Goodbye” as the salutation. Repeating the test is a bit wordy, but fortunately pytest lets us parameterise the test:
import pytest
from server import server

@pytest.mark.parametrize("greeting", [("Hello Thelma"), ("Goodbye Louise")])
def test_valid_greeting_accepted(greeting):

    result = server._parse_greet(greeting)

    assert result == []
We’re now passing 2 test cases into the same function: “Hello Thelma” and “Goodbye Louise”. In both cases, we expect the answer to be an empty list.
Let’s run the tests:
pytest --ignore=server/tests
============== test session starts ==============
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /home/sfinnie/projects/helloLSP
plugins: typeguard-2.13.3
collected 2 items

tests/test_parser.py .. [100%]

============== 2 passed in 0.47s =================
All good. Note it says 2 tests passed: that confirms both test cases are being executed.
Negative Tests
We need to check the parser correctly identifies errors in invalid greetings. After all, that’s a big part of the value we want the server to provide: telling us where we’ve gone wrong. Here’s a first attempt:
def test_invalid_greeting_rejected():

    greeting = "Hell Thelma" # should be Hello, not Hell
    result = server._parse_greet(greeting)

    assert result != []
That’s fine, but it doesn’t check that the Diagnostic is correct. Let’s do that:
from lsprotocol.types import Diagnostic, Position  # types used in the assertions below

@pytest.mark.parametrize("greeting", [("Wotcha Thelma"), ("Goodbye L0u1se"), ("Goodbye Louise again")])
def test_invalid_greeting_rejected(greeting):

    # when
    result = server._parse_greet(greeting)

    # then
    assert len(result) == 1

    diagnostic: Diagnostic = result[0]
    assert diagnostic.message == "Greeting must be either 'Hello <name>' or 'Goodbye <name>'"

    start: Position = diagnostic.range.start
    end: Position = diagnostic.range.end

    assert start.line == 0
    assert start.character == 0
    assert end.line == 0
    assert end.character == len(greeting)
There’s a bit of a question about whether we should test the error message. The test is brittle, in that if we want to change the message, it means changing the text in two places. Arguably a better answer would be to have the message in a separate structure that both the parser function and the test referred to. Against that, it’s a bit less readable. So, for now, we’ll leave it as is.
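If we did later want to remove the duplication, a sketch might look something like this (GREETING_ERROR_MESSAGE is a hypothetical name; it doesn’t exist in the tutorial code):

# Hypothetical refactoring sketch: keep the message text in one place.
# In server/server.py we might define:
#     GREETING_ERROR_MESSAGE = "Greeting must be either 'Hello <name>' or 'Goodbye <name>'"
# and have _parse_greet() use it when constructing the Diagnostic.
# The test then asserts against the same constant instead of repeating the string:

def test_invalid_greeting_has_expected_message():
    result = server._parse_greet("Wotcha Thelma")
    assert result[0].message == server.GREETING_ERROR_MESSAGE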
The test has also been parameterised to cover some obvious failures. Are they enough? That depends. We could get smarter, for example using Hypothesis to generate test input rather than relying on 3 specific test cases. For now, though, the cases we have give sufficient confidence for the purpose here: we’re exploring building a language server, not best practice in test coverage.
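For illustration, a property-based version of the positive test might look roughly like this (a sketch only, assuming Hypothesis is installed and that the grammar accepts any purely alphabetic name):

# Sketch of a property-based test using Hypothesis (not part of the tutorial code).
from hypothesis import given, strategies as st

from server import server


@given(
    salutation=st.sampled_from(["Hello", "Goodbye"]),
    name=st.from_regex(r"[A-Za-z]+", fullmatch=True),
)
def test_any_alphabetic_name_is_accepted(salutation, name):
    # Every generated greeting should parse without producing diagnostics.
    assert server._parse_greet(f"{salutation} {name}") == []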
Testing the Server
The parser is the core of the implementation so far. pygls provides most of the server implementation, handling communication with the client, including marshalling and interpreting the json-rpc messages. There’s little value in re-testing pygls here: it already has a solid set of tests.
However: it is worth testing that we’ve wired up the parser correctly. We saw above that we need the parser to be called on both textDocument/didOpen and textDocument/didChange. pygls can’t ensure that for us. So there’s value in running some tests that ensure the full language server responds as expected in terms of json-rpc messages sent and received.
Note
I’ve deliberately avoided calling these “unit” or “integration” tests. That’s a holy war we don’t want to get into here.
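As a reminder of what “wired up correctly” means here, the server-side handlers look roughly like this (an illustrative sketch of the pygls wiring, not a verbatim copy of the tutorial’s server.py; _parse_greet is the parser function tested above and is assumed to be defined in the same module):

# Illustrative sketch of the wiring under test; details may differ from server.py.
from lsprotocol.types import (
    TEXT_DOCUMENT_DID_CHANGE,
    TEXT_DOCUMENT_DID_OPEN,
    DidChangeTextDocumentParams,
    DidOpenTextDocumentParams,
)
from pygls.server import LanguageServer

greet_server = LanguageServer("greet-server", "v0.1")


@greet_server.feature(TEXT_DOCUMENT_DID_OPEN)
def did_open(ls: LanguageServer, params: DidOpenTextDocumentParams):
    # Parse the newly opened file and publish any diagnostics.
    diagnostics = _parse_greet(params.text_document.text)
    ls.publish_diagnostics(params.text_document.uri, diagnostics)


@greet_server.feature(TEXT_DOCUMENT_DID_CHANGE)
def did_change(ls: LanguageServer, params: DidChangeTextDocumentParams):
    # Re-parse the contents now held in the workspace and publish fresh diagnostics.
    doc = ls.workspace.get_document(params.text_document.uri)
    ls.publish_diagnostics(doc.uri, _parse_greet(doc.source))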
If we’re to test the server, there are a few prerequisites we need to resolve:
How do we start the server, and know it’s started?
How do we send it messages, and receive the responses?
We could write this from first principles, constructing json-rpc messages directly. That would be a lot of work, though; it’s also unnecessary. The Language Server Protocol is symmetrical: both client and server can send commands, receive responses, and generate notifications. That means we can reuse the protocol implementation in pygls, essentially using an instance of the pygls server as a test client. pygls does this itself for its own tests.
However, there’s also lsp-devtools. It provides pytest-lsp, a package that makes things a bit more convenient. See this discussion thread for some background.
Setup
First, we need to install pytest-lsp:
python3 -m pip install pytest-lsp
Let’s also get rid of the tests in the original skeleton; we’re not using them, and they cause an error unless pytest is run with --ignore=server/tests.
rm -rf server/tests
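With the skeleton tests gone, the suite can be run with a plain pytest invocation; the --ignore flag is no longer needed from here on:

pytest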
Parsing a valid file on opening
Now we can create some end-to-end tests in tests/test_server.py. Here’s the setup and first test:
import sys
import pytest
import pytest_lsp
from pytest_lsp import ClientServerConfig

from lsprotocol.types import TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS

@pytest_lsp.fixture(
    config=ClientServerConfig(
        server_command=[sys.executable, "-m", "server"],
        root_uri="file:///path/to/test/project/root/"
    ),
)
async def client():
    pass


@pytest.mark.asyncio
async def test_parse_sucessful_on_file_open(client):
    """Ensure that the server implements diagnostics correctly when a valid file is opened."""

    test_uri = "file:///path/to/file.txt"
    client.notify_did_open(
        uri=test_uri, language="greet", contents="Hello Bob"
    )

    # Wait for the server to publish its diagnostics
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    assert test_uri in client.diagnostics
    assert len(client.diagnostics[test_uri]) == 0
The @pytest_lsp.fixture annotation takes care of setting up the client, starting the server, and establishing communications between them. Note that the root_uri parameter is set to a sample value, but that doesn’t matter - the server doesn’t actually read the contents of the file from disk. That’s a deliberate design feature of LSP: the client passes the actual content to the server using the protocol itself. That’s because the client “owns” the file being edited. Were the server to read it independently, the client and server could have different views of the file’s contents due to e.g. file system caching. So the client passes the actual file contents to the server in the contents parameter, as can be seen in the test:
    uri=test_uri, language="greet", contents="Hello Bob"
Hello Bob is a valid greeting, so we expect there to be no diagnostics returned. The test assertions check that.
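For reference, the notification that notify_did_open() produces is a standard textDocument/didOpen message. Roughly, it carries this payload (a sketch, shown here as a Python dict; field names are from the LSP specification):

# Approximate shape of the textDocument/didOpen notification sent to the server.
did_open_notification = {
    "jsonrpc": "2.0",
    "method": "textDocument/didOpen",
    "params": {
        "textDocument": {
            "uri": "file:///path/to/file.txt",
            "languageId": "greet",
            "version": 1,
            "text": "Hello Bob",  # the file contents travel inside the message itself
        }
    },
}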
Parsing an invalid file on opening
Now let’s ensure we do get diagnostics published if the file contents are invalid. Here’s the new test:
@pytest.mark.asyncio
async def test_parse_fail_on_file_open(client):
    """Ensure that the server implements diagnostics correctly when an invalid file is opened."""

    test_uri = "file:///path/to/file.txt"
    client.notify_did_open(
        uri=test_uri, language="greet", contents="Hello Bob1"
    )

    # Wait for the server to publish its diagnostics
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    assert test_uri in client.diagnostics
    assert len(client.diagnostics[test_uri]) == 1
    assert client.diagnostics[test_uri][0].message == "Greeting must be either 'Hello <name>' or 'Goodbye <name>'"
It’s largely as before. The contents parameter is now set to an invalid greeting (Hello Bob1 is invalid because numbers aren’t allowed in names). We now expect a diagnostic to be published, so the length of the diagnostics array is 1. The same debate applies here about checking the actual text of the message; again, I’ve chosen to replicate the text for readability.
Ensuring the file is parsed when changed
Remember that we want to parse the file when changed as well as when opened. That means another pair of tests, checking for successful & unsuccessful parsing of a changed file. Here’s the successful one:
@pytest.mark.asyncio
async def test_parse_sucessful_on_file_change(client):
    """Ensure that the server implements diagnostics correctly when a file is changed and the updated contents are valid."""

    # given
    test_uri = "file:///path/to/file.txt"
    client.notify_did_open(
        uri=test_uri, language="greet", contents="Hello B0b"
    )
    # Get diagnostics from file open before notifying change
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    # when
    client.notify_did_change(
        uri=test_uri, text="Hello Bob"
    )
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    # then
    assert test_uri in client.diagnostics
    assert len(client.diagnostics[test_uri]) == 0
The LSP says a file must be notified as open before it can be changed, hence the need for notify_did_open() before calling notify_did_change(). We await the diagnostics from notify_did_open() before invoking notify_did_change(); that ensures the diagnostics published for the open have been received before we check those published for the change (note the contents on opening are deliberately set to an invalid greeting, which the change then corrects).
Here’s the final test: ensuring the correct diagnostic is published when a greeting is changed from valid to invalid:
@pytest.mark.asyncio
async def test_parse_fails_on_file_change(client):
    """Ensure that the server implements diagnostics correctly when a file is changed and the updated contents are invalid."""

    # given
    test_uri = "file:///path/to/file.txt"
    client.notify_did_open(
        uri=test_uri, language="greet", contents="Hello Bob"
    )
    # Get diagnostics from file open before notifying change
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    # when
    client.notify_did_change(
        uri=test_uri, text="Hello B0b"
    )
    await client.wait_for_notification(TEXT_DOCUMENT_PUBLISH_DIAGNOSTICS)

    # then
    assert test_uri in client.diagnostics
    assert len(client.diagnostics[test_uri]) == 1
    assert client.diagnostics[test_uri][0].message == "Greeting must be either 'Hello <name>' or 'Goodbye <name>'"
Wrapping up
We now have some end-to-end tests that check parsing works correctly, both on initial open and on change. We’re not checking all the permutations of parsing because that’s covered in the parser tests we created first.
The code at this point is tagged as v0.3.