Add research menu and relevant docs (#157)

This commit is contained in:
Jinho Jang 2024-02-15 22:38:16 +09:00 committed by GitHub
parent 9ef95e4207
commit c72dc2d62b
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
34 changed files with 605 additions and 4 deletions

View File

@@ -76,6 +76,7 @@
"faucet",
"concat",
"certonly",
"txid",
"baarerstrasse",
"FDPIC",
],

View File

@@ -30,7 +30,7 @@ yarn install
## Running Locally
```shell
yarn start
yarn start # Run 'node fetch-content.js' in the root directory to fetch remote files
```
Check for spelling errors before deploying:
@@ -42,10 +42,10 @@ yarn check:spell
Create a production build locally to check for errors:
```shell
yarn build
yarn build # Runs 'node fetch-content.js' and then 'docusaurus build'
# The 'fetch-content.js' script fetches documents from the nwaku and research repositories.
# test the build
yarn serve
```

View File

@@ -0,0 +1,20 @@
{ "words":
[
"pubsubtopic",
"jmeter",
"analyzed",
"queryc",
"wakudev",
"statusim",
"chronos",
"libpqis",
"Conn",
"messageindex",
"storedat"
]
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 201 KiB

View File

@@ -0,0 +1,239 @@
---
title: PostgreSQL
description: Document that describes why Nim-Waku started to use Postgres and shows some benchmark and comparison results.
---
## Introduction
The *Nim Waku Node*, *nwaku*, has the capability of archiving messages up to a certain limit (e.g. 30 days) so that other nodes can synchronize their message history through the *Store* protocol.
*nwaku* originally used *SQLite* to archive messages, but this has an impact on the node: *nwaku* is single-threaded, so any *SQLite* operation impacts the performance of other protocols, such as *Relay*.
Therefore, *Postgres* was adopted to address this.
[https://github.com/waku-org/nwaku/issues/1888](https://github.com/waku-org/nwaku/issues/1888)
## How to connect the *nwaku* to *Postgres*
Simply pass the following parameter to *nwaku*:
```bash
--store-message-db-url="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/postgres"
```
Notice that this only makes sense if _nwaku_ has the _Store_ protocol mounted:
```bash
--store=true
```
(Start the _nwaku_ node with the `--help` flag for more _Store_ options.)
## Examples of *nwaku* using *Postgres*
[https://github.com/waku-org/nwaku-compose](https://github.com/waku-org/nwaku-compose)
[https://github.com/waku-org/test-waku-query](https://github.com/waku-org/test-waku-query)
## Stress tests
The following repository was created as a tool to stress and compare performance between *nwaku*+*Postgres* and *nwaku*+*SQLite*:
[https://github.com/waku-org/test-waku-query](https://github.com/waku-org/test-waku-query)
### Insert test results
#### Maximum insert throughput
**Scenario**
- 1 node subscribed to pubsubtopic x and the *Store* protocol mounted.
- n nodes connected to the “store” node, and publishing messages simultaneously to pubsubtopic x.
- All nodes running locally in a *Dell Latitude 7640*.
- Each published message is fixed, 1.4 KB: [publish_one_client.sh](https://github.com/waku-org/test-waku-query/blob/master/sh/publish_one_client.sh)
- The next script is used to simulate multiple nodes publishing messages: [publish_multiple_clients.sh](https://github.com/waku-org/test-waku-query/blob/fe7061a21eb14395e723402face755c826077aec/sh/publish_multiple_clients.sh)
**Sought goal**
Find out the maximum number of concurrent inserts that both *SQLite* and *Postgres* can support, and check whether _Postgres_ behaves better than _SQLite_.
**Conclusion**
Messages are lost after a certain threshold, and this message loss is due to limitations in the *Relay* protocol (GossipSub, libp2p).
For example, if we set 30 nodes publishing 300 messages simultaneously, then 8997 rows were stored and not the expected 9000, in both *SQLite* and *Postgres* databases.
The reason a few messages were lost is that the message rate was higher than the *Relay* protocol can support, so some messages were not stored. In this example, the test took 38.8 seconds; therefore, the node was receiving 232 msgs/sec, which is much more than the normal rate a node will work with, ~10 msgs/sec (rate extracted from Grafana stats for the *status.prod* fleet).
In conclusion, the bottleneck is within the *Relay* protocol itself and not the underlying databases. In other words, both *SQLite* and *Postgres* can support the maximum insert rate a Waku node will operate at under normal conditions.
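As a sanity check, the throughput figure above can be recomputed from the reported numbers (a quick sketch; all values are taken from the test description):

```javascript
// Recompute the offered load from the insert test above.
const publishers = 30;
const messagesPerPublisher = 300;
const expected = publishers * messagesPerPublisher; // 9000 messages published
const stored = 8997;                                // rows actually archived
const testDurationSec = 38.8;                       // reported test duration

const lost = expected - stored;                     // messages dropped by Relay
const ratePerSec = expected / testDurationSec;      // ~232 msg/sec offered load

console.log({ expected, stored, lost, ratePerSec: Math.round(ratePerSec) });
```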
### Query test results (jmeter)
In this case, we compare *Store* performance through the REST service.
**Scenario**
- node_a: one _nwaku_ node with *Store* and connected to *Postgres.*
- node_b: one _nwaku_ node with *Store* and using *SQLite*.
- Both *Postgres* and *SQLite* contain +1 million rows.
- node_c: one _nwaku_ node with *REST* enabled and acting as a *Store client* for node_a.
- node_d: one _nwaku_ node with *REST* enabled and acting as a *Store client* for node_b.
- With _jmeter_, 10 users make *REST* *Store* requests concurrently to each of the “rest” nodes (node_c and node_d).
- All _nwaku_ nodes running statusteam/nim-waku:v0.19.0
[This](https://github.com/waku-org/test-waku-query/blob/master/docker/jmeter/http_store_requests.jmx) is the _jmeter_ project used.
![Using jmeter](imgs/using-jmeter.png)
**Results**
Here, *node_b* achieves higher throughput than *node_a*, which indicates that the node using SQLite performs better. The following shows the measurements taken by _jmeter_ for the REST requests.
![jmeter results](imgs/jmeter-results.png)
### Query test results (only Store protocol)
In this test suite, only the Store protocol is analyzed, i.e. without REST. For that, a go-waku node is used, which acts as the *Store* client. On the other hand, another go-waku app publishes random *Relay* messages periodically. Therefore, this can be considered a more realistic approach.
The following diagram shows the topology used:
![Topology](imgs/topology-only-store-protocol.png)
For that, the next apps were used:
1. [Waku-publisher.](https://github.com/alrevuelta/waku-publisher/tree/9fb206c14a17dd37d20a9120022e86475ce0503f) This app can publish Relay messages with different numbers of clients
2. [Waku-store-query-generator](https://github.com/Ivansete-status/waku-store-query-generator/tree/19e6455537b6d44199cf0c8558480af5c6788b0d). This app is based on the Waku-publisher but in this case, it can spawn concurrent go-waku Store clients.
That topology is defined in [this](https://github.com/waku-org/test-waku-query/blob/7090cd125e739306357575730d0e54665c279670/docker/docker-compose-manual-binaries.yml) docker-compose file.
Notice that the two `nwaku` nodes run the very same version, which is compiled locally.
#### Comparing archive SQLite & Postgres performance in [nwaku-b6dd6899](https://github.com/waku-org/nwaku/tree/b6dd6899030ee628813dfd60ad1ad024345e7b41)
The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net).
**Scenario 1**
**Store rate:** 1 user generating 1 store-req/sec.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
In this case, we can see that the SQLite performance is better regarding the store requests.
![Insert time distribution](imgs/insert-time-dist.png)
![Query time distribution](imgs/query-time-dist.png)
The following graph shows how the *SQLite* node has blocking periods whereas the *Postgres* always gives a steady rate.
![Num queries per minute](imgs/num-queries-per-minute.png)
**Scenario 2**
**Store rate:** 10 users generating 1 store-req/sec.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
In this case, it is more evident that *SQLite* performs better.
![Insert time distribution](imgs/insert-time-dist-2.png)
![Query time distribution](imgs/query-time-dist-2.png)
**Scenario 3**
**Store rate:** 25 users generating 1 store-req/sec.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
In this case, the performance is similar with regard to timings. The store rate is higher in *SQLite*, and *Postgres* keeps the same level as in scenario 2.
![Insert time distribution](imgs/insert-time-dist-3.png)
![Query time distribution](imgs/query-time-dist-3.png)
#### Comparing archive SQLite & Postgres performance in [nwaku-b452ed8](https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e)
This nwaku commit is after a few **Postgres** optimizations were applied.
The next results were obtained by running the docker-compose-manual-binaries.yml from [test-waku-query-c078075](https://github.com/waku-org/test-waku-query/tree/c07807597faa781ae6c8c32eefdf48ecac03a7ba) in the sandbox machine (metal-01.he-eu-hel1.wakudev.misc.statusim.net).
**Scenario 1**
**Store rate:** 1 user generating 1 store-req/sec. Notice that the Store query used generates pagination, which provokes more subsequent queries than the 1 req/sec that would be expected without pagination.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
![Insert time distribution](imgs/insert-time-dist-4.png)
![Query time distribution](imgs/query-time-dist-4.png)
It is hard to see in the graph, but the average *Store* time was 11 ms.
**Scenario 2**
**Store rate:** 10 users generating 1 store-req/sec. Notice that the Store query used generates pagination, which provokes more subsequent queries than the 10 req/sec that would be expected without pagination.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
![Insert time distribution](imgs/insert-time-dist-5.png)
![Query time distribution](imgs/query-time-dist-5.png)
**Scenario 3**
**Store rate:** 25 users generating 1 store-req/sec. Notice that the Store query used generates pagination, which provokes more subsequent queries than the 25 req/sec that would be expected without pagination.
**Relay rate:** 1 user generating 10msg/sec, 10KB each.
![Insert time distribution](imgs/insert-time-dist-6.png)
![Query time distribution](imgs/query-time-dist-6.png)
#### Conclusions
After comparing both systems, *SQLite* performs much better than *Postgres*. However, a benefit of using *Postgres* is that it performs asynchronous operations and therefore doesn't consume CPU time that would be better invested in *Relay*, for example.
Remember that _nwaku_ is single-threaded and *chronos* performs orchestration among a bunch of async tasks; therefore, it is not good practice to block the whole _nwaku_ process on a query, as happens with *SQLite*.
After applying a few *Postgres* enhancements, it can be noticed that the time for concurrent *Store* queries doesn't go below the 250 ms barrier. The reason is that most of the time is consumed at [this point](https://github.com/waku-org/nwaku/blob/6da1aeec5370bb1c116509e770178cca2662b69c/waku/common/databases/db_postgres/dbconn.nim#L124). The `libpqisBusy()` function indicates that the connection is still busy even though the queries have finished.
Notice that we usually have a rate below 1100 req/minute in the _status.prod_ fleet (checked November 7, 2023).
-----------------------------
### Multiple nodes & one single database
This study aims to identify possible issues when several Waku nodes insert into or retrieve data from one single database.
The following diagram shows the scenario used for this analysis.
![digram_multiple_nodes_one_database](imgs/digram_multiple_nodes_one_database.png)
Three nim-waku nodes are connected to the same database, and all of them try to write messages to the same _PostgreSQL_ instance. With that, it is very common to see errors like:
```
ERR 2023-11-27 13:18:07.575+00:00 failed to insert message topics="waku archive" tid=2921 file=archive.nim:111 err="error in runStmt: error in dbConnQueryPrepared calling waitQueryToFinish: error in query: ERROR: duplicate key value violates unique constraint \"messageindex\"\nDETAIL: Key (storedat, id, pubsubtopic)=(1701091087417938405, 479c95bbf74222417abf76c7f9c480a6790e454374dc4f59bbb15ca183ce1abd, /waku/2/default-waku/proto) already exists.\n
```
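One way for concurrent writers to tolerate such collisions (a hypothetical sketch, not necessarily what nwaku does) is an idempotent insert; the constraint and column names below mirror the error message above (`messageindex`: `storedat`, `id`, `pubsubtopic`), and the actual nwaku schema may differ:

```javascript
// Hypothetical idempotent insert: a second node writing the same message
// becomes a no-op instead of raising a duplicate-key error.
const insertSql = `
  INSERT INTO messages (storedat, id, pubsubtopic, payload)
  VALUES ($1, $2, $3, $4)
  ON CONFLICT ON CONSTRAINT messageindex DO NOTHING;
`;

console.log(insertSql);
```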
The `db-postgres-hammer` aims to stress the database from the `select` point of view. It performs `N` concurrent `select` queries at a certain rate.
#### Results
The following results were obtained on the sandbox machine (metal-01.he-eu-hel1.wakudev.misc), running nim-waku nodes from https://github.com/waku-org/nwaku/tree/b452ed865466a33b7f5b87fa937a8471b28e466e and using the `test-waku-query` project from https://github.com/waku-org/test-waku-query/tree/fef29cea182cc744c7940abc6c96d38a68739356.
The results are as follows:
1. Two `nwaku-postgres-additional` inserting messages plus 50 `db-postgres-hammer` making 10 `selects` per second.
![Insert time distribution Postgres](imgs/insert-time-dist-postgres.png)
![Query time distribution Postgres](imgs/query-time-dist-postgres.png)
2. Five `nwaku-postgres-additional` inserting messages plus 50 `db-postgres-hammer` making 10 `selects` per second.
![Insert time distribution Postgres](imgs/insert-time-dist-postgres-2.png)
![Query time distribution Postgres](imgs/query-time-dist-postgres-2.png)
In this case, the insert time is more spread out because the insert operations are shared among five more nodes. The _Store_ query time remains the same on average.
3. Five `nwaku-postgres-additional` inserting messages plus 100 `db-postgres-hammer` making 10 `selects` per second.
This case is similar to case 2 but stresses the database more.
![Insert time distribution Postgres](imgs/insert-time-dist-postgres-3.png)
![Query time distribution Postgres](imgs/query-time-dist-postgres-3.png)

View File

@@ -0,0 +1,11 @@
{ "words":
[
"deanonymise",
"filecoin",
"hopr",
"incentivisation",
"ipfs",
"lightpush",
"waku"
]
}

View File

@@ -0,0 +1,227 @@
---
title: Incentivisation
---
Waku is a family of decentralised communication protocols.
The Waku Network (TWN) consists of independent nodes running Waku protocols.
TWN needs incentivisation (shortened to i13n) to ensure proper node behaviour.
The goal of this document is to outline and contextualize our approach to TWN i13n.
After providing an overview of Waku and relevant prior work,
we focus on Waku Store - a client-server protocol for querying historical messages.
We introduce a minimal viable addition to Store to enable i13n,
and list research directions for future work.
# Incentivisation in decentralised networks
## Incentivisation tools
We can think of incentivisation tools as a two-by-two matrix:
- rewards vs punishment;
- monetary vs reputation.
In other words, there are four quadrants:
- monetary reward: the node gets rewarded;
- monetary punishment: the node deposits funds that are taken away (slashed) if it misbehaves;
- reputation reward: the node's reputation increases if it behaves well;
- reputation punishment: the node's reputation decreases if it behaves badly.
Reputation only works if high reputation brings tangible benefits.
For example, if nodes choose neighbors based on reputation, low-reputation nodes miss out on potential revenue.
Reputation scores may be local (a node assigns scores to its neighbors) or global (each node gets a uniform score).
Global reputation in its simplest form involves a trusted third party,
although decentralised approaches are also possible.
## Prior work
We may split incentivized decentralised networks into early file-sharing, blockchains, and decentralised storage.
### Early P2P file-sharing
Early P2P file-sharing networks employ reputation-based approaches and sticky defaults.
For instance, the BitTorrent protocol rewards uploading peers with faster downloads.
The download bandwidth available to a peer depends on how much it has uploaded.
Moreover, peers share pieces of a file before having received it in whole.
This non-monetary i13n policy has proven to work in practice.
### Blockchains
Bitcoin has introduced proof-of-work (PoW) for native monetary rewards in a P2P network.
PoW miners are automatically assigned newly mined coins for generating blocks.
Miners must expend physical resources to generate a block.
If the block is invalid, these expenses are not compensated (implicit monetary punishment).
Proof-of-stake (PoS), used in Ethereum and many other cryptocurrencies, introduces explicit monetary punishments.
PoS validators lock up (stake) native tokens and get rewarded for validating blocks or slashed for misbehaviour.
### Decentralised storage
Post-Bitcoin decentralised storage networks include Codex, Storj, Sia, Filecoin, IPFS.
Their i13n methods combine techniques from early P2P file-sharing with blockchain-inspired reward mechanisms.
# Waku background
Waku is a [family of protocols](https://waku.org/about/architect) for a modular privacy-preserving censorship-resistant decentralised communication network.
The backbone of Waku is the Relay protocol (and its spam-protected version [RLN-Relay](https://rfc.vac.dev/spec/17/)).
Additionally, there are light protocols: Store, Filter, and Lightpush.
Light protocols are also referred to as client-server protocols and request-response protocols.
A server is a node running Relay and a server-side of at least one light protocol.
A client is a node running a client-side of any of the light protocols.
A server may sometimes be referred to as a full node, and a client as a light node.
There is no strict definition of a full node vs a light node in Waku (see [discussion](https://github.com/waku-org/research/issues/28)).
In light protocols, a client sends a request to a server, and a server performs some actions and returns a response:
- [Store](https://rfc.vac.dev/spec/13/): the server responds with relayed messages that match a set of criteria;
- [Filter](https://rfc.vac.dev/spec/12/): the server will relay (only) messages that pass a filter to the client;
- [Lightpush](https://rfc.vac.dev/spec/19/): the server publishes the client's message to the Relay network.
## Waku i13n challenges
Waku has no consensus and no native token, which brings it closer to reputation-incentivised file-sharing networks.
As of late 2023, Waku only operates under reputation-based rewards and punishments.
While [RLN-Relay](https://rfc.vac.dev/spec/17/) adds monetary punishments for spammers, slashing is yet to be activated.
Monetary rewards and punishments should ideally be atomically linked with the node's behaviour.
A benefit of blockchains in this respect is that the desired behaviour of miners or validators can be verified automatically.
Enforcing atomicity in a communication network is more challenging:
it is non-trivial to prove that a given piece of data has been relayed.
Our goal is to combine monetary and reputation-based incentives for Waku.
Monetary incentives have demonstrated their robustness in blockchains.
We think they are necessary to scale the network beyond the initial phase when it's maintained altruistically.
## Waku Store
Waku Store is a light protocol for querying historic messages that works as follows:
1. the client sends a `HistoryQuery` to the server;
2. the server sends a `HistoryResponse` to the client.
The response may be split into multiple parts, as specified by pagination parameters in `PagingInfo`.
We define a _relevant_ message as a message that matches client-defined criteria (e.g., relayed within a given time frame).
Upon receiving a request, a server should quickly send back a response containing all and only relevant messages.
# Waku Store incentivisation
An incentivised Store protocol has the following extra steps:
1. pricing:
1. cost calculation
2. price advertisement
3. price negotiation
2. payment:
1. payment itself
2. proof of payment
3. reputation
4. results cross-checking
In this document, we focus on the simplest proof-of-concept (PoC) i13n for Store.
Compared to the fully-fledged protocol, the PoC version is simplified in the following ways:
- cost calculation is based on a common-knowledge price;
- there is no price advertisement and no price negotiation;
- each query is paid for in a separate transaction, with `txid` acting as a proof of payment;
- the reputation system is simplified (see below);
- the results are not cross-checked.
In the PoC protocol:
1. the client calculates the price based on the known rate per hour of history;
2. the client pays the appropriate amount to the server's address;
3. the client sends a `HistoryQuery` to the server alongside the proof of payment (`txid`);
4. the server checks that the `txid` corresponds to a confirmed transaction with at least the required amount;
5. the server sends a `HistoryResponse` to the client.
In the following subsections, we list potential directions for future work towards a fully-fledged i13n mechanism.
## Pricing
For PoC, we assume a constant price per hour of history.
This price and the blockchain address of the server are assumed to be common knowledge.
This simplifies the client-server interaction, avoiding the price negotiation step.
In future versions of the protocol, the price will be negotiated and will depend on multiple parameters,
such as the total size of the relevant messages in the response.
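The PoC cost calculation can be sketched as follows (the function name and rate value are illustrative, not part of the protocol; think of the rate as the smallest token units per hour of history):

```javascript
// Constant, common-knowledge rate per hour of requested history (illustrative).
const RATE_PER_HOUR = 10;

// Price of a Store query covering the time frame [startTimeMs, endTimeMs].
function queryPrice(startTimeMs, endTimeMs) {
  const hours = (endTimeMs - startTimeMs) / (60 * 60 * 1000);
  return hours * RATE_PER_HOUR;
}

// A query covering 24 hours of history costs 24 * RATE_PER_HOUR units.
console.log(queryPrice(0, 24 * 60 * 60 * 1000));
```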
### Future work
- DoS protection - see https://github.com/waku-org/research/issues/66
- Cost calculation - see https://github.com/waku-org/research/issues/35
- Price advertisement - see https://github.com/waku-org/research/issues/51
- Price negotiation - see https://github.com/waku-org/research/issues/52
## Payment
For the PoC, each request is paid for with a separate transaction.
The transaction hash (`txid`) acts as a proof of payment.
The server verifies the payment by ensuring that:
1. the transaction has been confirmed;
2. the transaction is paying the proper amount to the server's account;
3. the `txid` does not correspond to any prior response.
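The three checks above might be sketched as follows; `fetchTx`, the transaction shape, and `seenTxids` are hypothetical stand-ins for whatever chain client and bookkeeping the server actually uses:

```javascript
// Hypothetical server-side payment verification for the PoC protocol.
async function verifyPayment(txid, { fetchTx, seenTxids, serverAddress, price }) {
  if (seenTxids.has(txid)) return false;          // rule 3: txid not already redeemed
  const tx = await fetchTx(txid);                 // look the transaction up on-chain
  if (!tx || !tx.confirmed) return false;         // rule 1: transaction confirmed
  if (tx.to !== serverAddress || tx.amount < price) {
    return false;                                 // rule 2: proper amount to the server
  }
  seenTxids.add(txid);                            // record the txid as redeemed
  return true;
}
```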
The client gives proof of payment before it receives the response.
Other options could be:
1. the client pays after the fact;
2. the client pays partly upfront and partly after the fact;
3. a centralised third party (either trusted or semi-trusted, like a smart contract) ensures atomicity;
4. cryptographically ensured atomicity (similar to atomic swaps, Lightning, or Hopr).
Our design considerations are:
- the PoC protocol should be simple;
- servers are more "permanent" entities and are more likely to have long-lived identities;
- it is more important to protect the client's privacy than the server's privacy.
In light of these criteria, we suggest that the client pays first.
This is simpler than splitting the payment, or involving an extra atomicity-enforcing mechanism.
Moreover, pre-payment is arguably more privacy-preserving than post-payment, which encourages servers to deanonymise clients to prevent fraud.
### Future work
- Add more payment methods - see https://github.com/waku-org/research/issues/58
- Design a subscription model with service credentials - see https://github.com/waku-org/research/issues/59
- Add privacy to service credentials - see https://github.com/waku-org/research/issues/60
- Consider the impact of network disruptions - see https://github.com/waku-org/research/issues/65
## Reputation
We use reputation to discourage the server from taking the payment and not responding.
The client keeps track of the server's reputation:
- all servers start with zero reputation points;
- if the server honours the request, it gets `+n` points;
- if the server does not respond before a timeout, it gets `-m` points;
- if the server's reputation drops below `k` points, the client will never query it again.
`n`, `m`, and `k` are subject to configuration.
Optionally, a client may treat a given server as trusted, assigning it a constant positive reputation.
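These rules could be sketched client-side as follows (class and method names are illustrative, and the `n`, `m`, `k` defaults are arbitrary configuration values):

```javascript
// Client-side reputation tracking for Store servers.
class ServerReputation {
  constructor({ n = 1, m = 2, k = -5 } = {}) {
    this.n = n;                  // points for honouring a request
    this.m = m;                  // points deducted on timeout
    this.k = k;                  // ban threshold
    this.scores = new Map();     // serverId -> points, starting at zero
  }
  score(id) { return this.scores.get(id) ?? 0; }
  onHonoured(id) { this.scores.set(id, this.score(id) + this.n); }
  onTimeout(id) { this.scores.set(id, this.score(id) - this.m); }
  isBanned(id) { return this.score(id) < this.k; }
}
```

A server that times out three times with these defaults drops to `-6` points, crosses the `k = -5` threshold, and is never queried again.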
Potential issues:
- An attacker can establish new server identities and continue running away with clients' money. Countermeasures:
- a client only queries trusted servers (which however leads to centralisation);
- when querying a new server, a client first sends a small (i.e. cheap) request as a test;
- more generally, the client selects a server on a case-by-case basis, weighing the payment amount against the server's reputation.
- The ban mechanism can theoretically be abused. For instance, a competitor may attack the victim server and cause the clients who were awaiting a response to ban that server. Countermeasure: prevent DoS attacks.
### Future work
Design a more comprehensive reputation system:
- local reputation - see https://github.com/waku-org/research/issues/48
- global reputation - see https://github.com/waku-org/research/issues/49
## Results cross-checking
As there is no consensus over past messages, a client may want to query multiple servers and merge their responses.
Cross-checking helps ensure that servers are a) not censoring real messages; b) not injecting fake messages into history.
Cross-checking is absent in PoC but may be considered later.
### Future work
- Cross-checking the results against censorship - see https://github.com/waku-org/research/issues/57
- Use RLN to limit fake message insertion - see https://github.com/waku-org/research/issues/38
# Evaluation
We should think about what the success metrics for an incentivised protocol are, and how to measure them both in simulated settings, as well as in a live network.
# Longer-term future work
- Analyze privacy issues - see https://github.com/waku-org/research/issues/61
- Analyze decentralised storage protocols and their relevance e.g. as back-end storage for Store servers - see https://github.com/waku-org/research/issues/34
- Analyze the role of message senders, in particular, whether they should pay for sending non-ephemeral messages - see https://github.com/waku-org/research/issues/32
- Generalise incentivisation protocol to other Waku light protocols (Lightpush and Filter) - see https://github.com/waku-org/research/issues/67.

View File

@@ -78,6 +78,12 @@ const config = {
sidebarId: "learn",
label: "Learn",
},
{
type: "docSidebar",
position: "left",
sidebarId: "research",
label: "Research",
},
{
href: "https://discord.waku.org",
position: "left",

fetch-content.js (new file, 79 lines)
View File

@@ -0,0 +1,79 @@
const https = require('https');
const fs = require('fs');
const path = require('path');
async function fetchFromGitHub(url, callback) {
https.get(url, { headers: { 'User-Agent': 'Node.js' } }, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
callback(null, JSON.parse(data));
});
}).on('error', (err) => {
callback(err, null);
});
}
async function fetchDirectoryContents(dirUrl, basePath, prefixToRemove) {
fetchFromGitHub(dirUrl, async (err, files) => {
if (err) {
console.error('Error fetching files:', err.message);
return;
}
for (const file of files) {
const relativePath = file.path.replace(new RegExp(`^${prefixToRemove}`), '');
const filePath = path.join(basePath, relativePath);
if (file.type === 'file') {
await downloadAndSaveFile(file.download_url, filePath);
} else if (file.type === 'dir') {
await fetchDirectoryContents(file.url, basePath, prefixToRemove); // Recursive call for subdirectories
}
}
});
}
async function downloadAndSaveFile(url, filePath) {
const fullFilePath = path.join(__dirname, filePath);
https.get(url, (res) => {
const directory = path.dirname(fullFilePath);
// Ensure the directory exists
fs.mkdirSync(directory, { recursive: true });
const fileStream = fs.createWriteStream(fullFilePath);
res.pipe(fileStream);
fileStream.on('finish', () => {
fileStream.close();
console.log('Downloaded and saved:', filePath);
});
}).on('error', (err) => {
console.error('Error downloading file:', err.message);
});
}
const repositories = [
{
baseUrl: 'https://api.github.com/repos/waku-org/nwaku/contents/docs/benchmarks',
baseSavePath: '/docs/research/benchmarks/',
prefixToRemove: 'docs/benchmarks/'
},
{
baseUrl: 'https://api.github.com/repos/waku-org/research/contents/docs',
baseSavePath: '/docs/research/research-and-studies/',
prefixToRemove: 'docs/'
}
];
fs.rmSync('docs/research/', { recursive: true, force: true });
repositories.forEach(repo => {
fetchDirectoryContents(repo.baseUrl, repo.baseSavePath, repo.prefixToRemove);
});

View File

@@ -5,7 +5,7 @@
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"build": "node fetch-content.js && docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",

View File

@@ -85,6 +85,20 @@ const sidebars = {
"learn/waku-vs-libp2p",
"learn/glossary",
],
research: [
{
type: "category",
label: "Research and Studies",
collapsed: false,
items: ["research/research-and-studies/incentivisation"],
},
{
type: "category",
label: "Nwaku Benchmarks",
collapsed: false,
items: ["research/benchmarks/postgres-adoption"],
},
],
};
module.exports = sidebars;

View File

@@ -32,3 +32,7 @@ html[data-theme="dark"] .header-github-link:before {
.hidden {
display: none;
}
.theme-doc-toc-desktop {
display: none;
}