First commit, already well along

2025-11-10 18:33:24 +01:00
commit db4f0508cb
652 changed files with 440521 additions and 0 deletions

notes/.gitkeep Normal file

notes/Poppy-test.md Normal file

@@ -0,0 +1,9 @@
---
title: Poppy Test
date: 10-11-2025
last_modified: 10-11-2025:18:08
---
# Poppy Test
Start writing your note here...


@@ -0,0 +1,292 @@
---
title: Welcome to Project Notes
date: 08-11-2025
last_modified: 09-11-2025:01:13
tags:
- aide
- documentation
- tutorial
---
08/11/2025 -
This is my note-taking app
## I hope it works well
# Welcome to Project Notes
Welcome to your Markdown note-taking app! This page explains how to use the application and the front matter format.
## What is Front Matter?
**Front matter** is a block of YAML metadata placed at the top of each note, between two `---` lines. It lets you attach structured information to your notes.
### Front matter format
```yaml
---
title: Title of your note
date: 08-11-2025
last_modified: 08-11-2025:14:10
tags: [projet, urgent, backend]
---
```
### Available fields
- **title**: The note's title (generated automatically from the file name)
- **date**: Creation date (format: DD-MM-YYYY)
- **last_modified**: Last modification (format: DD-MM-YYYY:HH:MM), updated automatically
- **tags**: A list of tags for organizing and searching your notes
### Tag examples
You can write your tags in two ways:
```yaml
# Inline format
tags: [projet, urgent, backend, api]
# List format
tags:
- projet
- urgent
- backend
- api
```
Tags are indexed and let you search your notes from the search bar.
## Markdown Guide
### Headings
```markdown
# Heading level 1
## Heading level 2
### Heading level 3
```
### Emphasis
```markdown
*italic* or _italic_
**bold** or __bold__
***bold and italic***
~~strikethrough~~
```
Rendered: *italic*, **bold**, ***bold and italic***
### Lists
#### Unordered list
```markdown
- Item 1
- Item 2
  - Sub-item 2.1
  - Sub-item 2.2
- Item 3
```
Rendered:
- Item 1
- Item 2
  - Sub-item 2.1
  - Sub-item 2.2
- Item 3
#### Ordered list
```markdown
1. First item
2. Second item
3. Third item
```
Rendered:
1. First item
2. Second item
3. Third item
### Links and Images
```markdown
[Link text](https://example.com)
![Alt text](image-url.jpg)
```
Example: [Markdown Documentation](https://www.markdownguide.org/)
### Code
#### Inline code
Use backticks: `inline code`
#### Code block
````markdown
```javascript
function hello() {
  console.log("Hello World!");
}
```
````
Rendered:
```javascript
function hello() {
  console.log("Hello World!");
}
```
### Blockquotes
```markdown
> This is a quote
> spanning several lines
```
Rendered:
> This is a quote
> spanning several lines
### Tables
```markdown
| Column 1 | Column 2 | Column 3 |
|----------|----------|----------|
| Row 1    | Data     | Data     |
| Row 2    | Data     | Data     |
```
Rendered:
| Column 1 | Column 2 | Column 3 |
|----------|----------|----------|
| Row 1    | Data     | Data     |
| Row 2    | Data     | Data     |
### Horizontal rules
```markdown
---
```
Rendered:
---
## Slash Commands
Type `/` at the start of a line to access quick commands:
- `/h1`, `/h2`, `/h3` - Headings
- `/list` - Bulleted list
- `/date` - Insert today's date
- `/link` - Create a link
- `/bold` - Bold text
- `/italic` - Italic text
- `/code` - Inline code
- `/codeblock` - Code block
- `/quote` - Blockquote
- `/hr` - Horizontal rule
- `/table` - Create a table
**Navigation**: Use the ↑↓ arrows to navigate, Enter or Tab to insert, Esc to cancel.
## Shortcuts and Tips
### Creating a note
Click the **✨ New note** button in the header. If the note already exists it will be opened; otherwise it will be created.
### Searching notes
Use the search bar at the top to filter your notes by tags. Results update in real time.
### Saving
Click the **💾 Save** button to save your changes. The `last_modified` front matter field is updated automatically.
### Deleting a note
Click the 🗑️ icon next to the note's name in the sidebar.
## Organizing with tags
Tags are a great way to organize your notes. A few suggestions:
- **By project**: `projet-notes`, `projet-api`, `projet-frontend`
- **By priority**: `urgent`, `important`, `backlog`
- **By type**: `documentation`, `tutorial`, `meeting`, `todo`
- **By technology**: `javascript`, `go`, `python`, `docker`
- **By status**: `en-cours`, `terminé`, `archive`
## Complete example
Here is an example of a complete note:
````markdown
---
title: Backend API Meeting
date: 08-11-2025
last_modified: 08-11-2025:15:30
tags: [meeting, backend, api, urgent]
---
# Backend API Meeting
## Attendees
- Alice (Lead Dev)
- Bob (Backend)
- Charlie (Frontend)
## Discussion points
### 1. API architecture
We decided on a REST architecture with the following endpoints:
- `GET /api/notes` - List all notes
- `POST /api/notes` - Create a note
- `PUT /api/notes/:id` - Update a note
- `DELETE /api/notes/:id` - Delete a note
### 2. Authentication
> JWT is used for authentication
Sample code:
```go
func generateToken(userID string) (string, error) {
	// Implementation
}
```
### 3. Next steps
- [ ] Implement the endpoints
- [ ] Write the tests
- [ ] API documentation
## Actions
| Who     | Action             | Deadline   |
|---------|--------------------|------------|
| Bob     | API endpoints      | 15-11-2025 |
| Charlie | Frontend interface | 20-11-2025 |
| Alice   | Review & deploy    | 25-11-2025 |
````
---
Happy note-taking! 📝

notes/meetings/export.md Normal file

@@ -0,0 +1,361 @@
---
title: export.md
date: 08-11-2025
last_modified: 09-11-2025:01:15
---
# How to replace a Chord IP on a storage node / S3C cluster
## Pre-checks
> **Note**
> - The RING should be green on META and DATA
> - S3C should be green and Metadata correctly synced (a quick ring status loop is sketched just below)
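For a quick look at the ring state, the status loop used later in this runbook works here too (run from the supervisor):
```bash
# Print the status of every ring; everything should be green before starting
for RING in $(ringsh supervisor ringList); do
  echo " #### ${RING} ####"
  ringsh supervisor ringStatus ${RING}
done
```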
- Check the server name in the federation inventory:
```bash
cd /srv/scality/s3/s3-offline/federation/
cat env/s3config/inventory
```
- Run a backup of the config files for all nodes
```bash
salt '*' cmd.run "scality-backup -b /var/lib/scality/backup"
```
- Check ElasticSearch Status (from the supervisor)
```bash
curl -Ls http://localhost:4443/api/v0.1/es_proxy/_cluster/health?pretty
```
- Check the S3C Metadata status:
```bash
cd /srv/scality/s3/s3-offline/federation/
./ansible-playbook -i env/s3config/inventory tooling-playbooks/gather-metadata-status.yml
```
If you have SOFS Connectors, also check the ZooKeeper status.
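A minimal check, assuming ZooKeeper answers on its default client port 2181 and that four-letter commands are allowed (both are assumptions, adapt to your install):
```bash
# "ruok" should answer "imok" on a healthy ZooKeeper
echo ruok | nc -w 2 localhost 2181; echo
```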
- Set variables:
```bash
OLDIP="X.X.X.X"
NEWIP="X.X.X.X"
RING=DATA
```
## Stop the RING internal jobs
- From the supervisor, disable auto join, auto rebuild and auto purge:
```bash
for RING in $(ringsh supervisor ringList); do \
ringsh supervisor ringConfigSet ${RING} join_auto 0; \
ringsh supervisor ringConfigSet ${RING} rebuild_auto 0; \
ringsh supervisor ringConfigSet ${RING} chordpurge_enable 0; \
done
```
- Leave the node from the UI, or with the following loop:
```bash
SERVER=myservername   # adapt to the correct server name
for NODE in \
$(for RING in $(ringsh supervisor ringList); do \
ringsh supervisor ringStatus ${RING} | \
grep 'Node: ' | \
grep -w ${SERVER} | \
cut -d ' ' -f 3 ;\
done); \
do \
echo ringsh supervisor nodeLeave ${NODE/:/ } ;\
done
```
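The loop above only prints the `nodeLeave` commands; review the output, then remove the leading `echo` to actually run them.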
### Stop the storage node services
> **Note**
> From the storage node
- Identify the roles of the server:
```bash
salt-call grains.get roles
```
Stop all the services
```bash
systemctl disable --now scality-node scality-sagentd scality-srebuildd scality-sophiactl elasticsearch.service
```
Stop S3C:
```bash
systemctl stop s3c@*
crictl ps -a
systemctl disable containerd.service
```
If the node also has ROLE_PROM / ROLE_ELASTIC / ROLE_ZK (see the note after the block):
```bash
systemctl stop prometheus
```
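For the remaining roles, the runbook only shows Prometheus; `elasticsearch.service` was already stopped by the `systemctl disable --now` line above. For ROLE_ZK, the unit name below is an assumption — verify it on your install first:
```bash
# Hypothetical unit name; list candidates with: systemctl list-units | grep -i zoo
systemctl stop zookeeper
```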
**NOW CHANGE THE IP ON THE NODE:**
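The OS-level change itself is outside Scality tooling; a minimal sketch, assuming a RHEL-family node managed by NetworkManager, where the connection name `eth0` and the /24 prefix are placeholders:
```bash
# Hypothetical example: adapt the connection name and prefix to your site
# NEWIP as set during the pre-checks (re-export it on the node if needed)
nmcli con mod eth0 ipv4.addresses ${NEWIP}/24
nmcli con up eth0
ip -4 addr show eth0   # verify the new address is active
```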
### Change the IP address in the supervisor config files
> **Note**
> From the supervisor
- Check the SSH connection manually and restart salt-minion:
```
systemctl restart salt-minion
```
```bash
# Remove the old salt minion key, then accept the new one
salt-key -d $SERVER
salt-key -L
salt-key -A
```
- Update the `plateform_description.csv` with the new IP
- Regenerate the pillar
- Replace the IP in `/etc/salt/roster` (see the sketch just below)
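For the roster, a simple in-place substitution does the job; a sketch, assuming `$OLDIP` and `$NEWIP` are still set from the pre-checks:
```bash
# Back up the roster, then swap the old IP for the new one
sed -i.bak "s/${OLDIP}/${NEWIP}/g" /etc/salt/roster
grep -n "${NEWIP}" /etc/salt/roster   # confirm the change
```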
Replace every occurrence of the OLDIP with the NEWIP in the Salt pillar config files:
```bash
#/srv/scality/bin/bootstrap -d /root/scality/myplatform.csv --only-pillar -t $SERVER
vim /srv/scality/pillar/scality-common.sls
vim /srv/scality/pillar/{{server}}.sls
salt '*' saltutil.refresh_pillar
salt '*' saltutil.sync_all refresh=True
```
- Check
```bash
grep $OLDIP /srv/scality/pillar/*
```
## RING: Change the IP in the scality-node config
> **Note**
> From the storage node
#### Storage node:
- Check the config file:
`cat /etc/scality/node/nodes.conf`
Then change the IP:
```bash
# Run a dry-run first with -d
/usr/bin/scality-update-chord-ip -n $NEWIP -d
/usr/bin/scality-update-chord-ip -n $NEWIP
/usr/bin/scality-update-node-ip -n $NEWIP -d
/usr/bin/scality-update-node-ip -n $NEWIP
```
- Check the config file after the IP change:
`cat /etc/scality/node/nodes.conf`
#### Srebuildd:
> **Note**
> From the supervisor
```bash
# Target all the storage nodes
salt -G 'roles:ROLE_STORE' state.sls scality.srebuildd.configured
```
Check with a grep:
```
salt -G 'roles:ROLE_STORE' cmd.run "grep $OLDIP /etc/scality/srebuildd.conf"
salt -G 'roles:ROLE_STORE' cmd.run "grep $NEWIP /etc/scality/srebuildd.conf"
```
If the old IP is still there after the salt state, run a sed replace to get rid of it:
```bash
salt -G 'roles:ROLE_STORE' cmd.run "sed -i.bak-$(date +%Y-%m-%d) 's/${OLDIP}/${NEWIP}/g' /etc/scality/srebuildd.conf"
```
Check:
```
salt -G 'roles:ROLE_STORE' cmd.run "grep $OLDIP /etc/scality/srebuildd.conf"
```
Restart srebuildd
```
salt -G 'roles:ROLE_STORE' service.restart scality-srebuildd
```
### Elasticsearch
Redeploy the Elasticsearch topology if the node had ROLE_ELASTIC:
```
salt -G 'roles:ROLE_ELASTIC' state.sls scality.elasticsearch.advertised
salt -G 'roles:ROLE_ELASTIC' state.sls scality.elasticsearch
```
#### Sagentd:
> **Note**
> From the storage node
```bash
salt-call state.sls scality.sagentd.registered
```
- Check with `cat /etc/scality/sagentd.yaml`
### ringsh-conf check
`ringsh show conf` appears to use store1 to talk to the RING, so its IP probably has to be changed as well:
```
ringsh show conf
ringsh supervisor serverList
```
Restart the Scality services:
```bash
systemctl enable --now scality-node scality-sagentd scality-srebuildd
```
The storage node should now show up in the supervisor UI with the new IP.
If not, change the IP on the storage node as explained below:
> **Note**
> Probably deprecated... not to be done.
From the supervisor GUI (`http://<IP>/gui`), go to Servers and delete the server, which should be red.
From the same page, add a new server and enter the name + the new IP.
From the terminal, check that the new server appears and is **online**.
At this point the storage node is supposed to be back in the RING with the NEW IP.
A somewhat brute-force check on the other servers:
```
# salt '*' cmd.run "grep -rw $OLDIP /etc/"
```
### Restart the Scality processes
```bash
systemctl enable --now scality-node scality-sagentd scality-srebuildd elasticsearch.service
for RING in $(ringsh supervisor ringList); do echo " #### $RING ####"; ringsh supervisor ringStorage $RING; ringsh supervisor ringStatus $RING; done
ringsh supervisor nodeJoinAll DATA
for RING in $(ringsh supervisor ringList); do \
ringsh supervisor ringConfigSet ${RING} join_auto 2; \
ringsh supervisor ringConfigSet ${RING} rebuild_auto 1; \
ringsh supervisor ringConfigSet ${RING} chordpurge_enable 1; \
done
```
### Update the SUPAPI DB
Check the UI; if the IP is still wrong there, fix it in the database:
```bash
grep -A3 SUP_DB /etc/scality/supapi.yaml |grep password |awk '{print $2}'
psql -U supapi
\dt
table server;
table server_ip;
UPDATE server SET management_ip = '10.98.0.8' WHERE id = 19;
UPDATE server_ip SET address = '10.98.0.8' WHERE id = 17;
```
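The `id` values and the `10.98.0.8` address in the two `UPDATE` statements are site-specific examples; read the right rows from the `table server;` and `table server_ip;` output first, and substitute your own NEWIP.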
### Elasticsearch status:
`curl -Ls http://127.0.0.1:4443/api/v0.1/es_proxy/_cluster/health?pretty`
## S3C: Change the topology
- Edit the inventory with the new IP:
```
cd /srv/scality/s3/s3-offline/federation
vim env/s3config/inventory
```
- Replace the IP in `group_vars/all`:
```
vim env/s3config/group_vars/all
```
We first have to advertise the IP change to the OTHER SERVERS.
For example, when changing the IP on md1-cluster1, we redeploy all the other servers with the new topology:
```bash
cd /srv/scality/s3/s3-offline/federation
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l md2-cluster1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l md3-cluster1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l md4-cluster1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l md5-cluster1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l stateless2 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
./ansible-playbook -i env/s3config/inventory run.yml -t s3,DR -l stateless1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
```
Note: the `-t s3,DR` tag selection may not work, due to a bug in some S3C versions.
If it does not, run `run.yml` without `-t`:
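For md2-cluster1, for instance, that is the same command with the tag selection dropped:
```bash
./ansible-playbook -i env/s3config/inventory run.yml -l md2-cluster1 --skip-tags "requirements,run::images,cleanup" -e "redis_ip_check=False"
```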
Once all the other servers are redeployed, redeploy S3 on the current server (md1-cluster1):
```bash
./ansible-playbook -i env/s3config/inventory run.yml -l md1-cluster1 --skip-tags "cleanup,run::images" -e "redis_ip_check=False"
```
### Redis on S3C
Redis on S3C does not handle IP address changes well, so check its status.
Check the Redis cluster: all nodes are supposed to report the same master IP:
```
../repo/venv/bin/ansible -i env/s3config/inventory -m shell -a 'ctrctl exec redis-server redis-cli -p 16379 sentinel get-master-addr-by-name scality-s3' md[12345]-cluster1
```
```
../repo/venv/bin/ansible -i env/s3config/inventory -m shell -a 'ctrctl exec redis-server redis-cli info replication | grep -E "master_host|role"' md[12345]-cluster1
```

notes/meetings/freepro.md Normal file

@@ -0,0 +1,16 @@
---
title: Freepro
date: 08-11-2025
last_modified: 10-11-2025:18:12
tags:
- default
---
# Freepro
Start writing your note here...
Blablabla
/kfdkfdkfdk


@@ -0,0 +1,19 @@
---
title: Outscale
date: 08-11-2025
last_modified: 10-11-2025:18:10
tags:
- outscale
---
mfdmfd
# Outscale
# Heading level 1
Start writing your note here...
bash
awk


@@ -0,0 +1,18 @@
---
title: Ma note.md
date: 08-11-2025
last_modified: 10-11-2025:18:08
tags:
- default
- recherche
- chat
---
# Nouvelle Note 1
Start writing your note here...
08/11/2025
## A new heading


@@ -0,0 +1,11 @@
---
title: Nouvelle Note 2
date: 08-11-2025
last_modified: 10-11-2025:18:14
tags:
- default
---
# Nouvelle Note 2
This is a test note


@@ -0,0 +1,63 @@
---
title: Silverbullet
date: 08-11-2025
last_modified: 09-11-2025:01:13
tags:
- ring
---
lsls
#### Server list:
```
This is a piece of code.
```
```bash
ringsh supervisor serverList
```
### Show config
Here you will find the `ring password` and `supapi db password`
```bash
ringsh-config show
```
#### Status:
```bash
for RING in $(ringsh supervisor ringList); do echo " #### $RING ####"; ringsh supervisor ringStorage $RING; ringsh supervisor ringStatus $RING; done
```
#### Disk usage (%):
```bash
ringsh supervisor ringStatus DATA | egrep -i '^disk' | awk -F ' ' '{if ($6 + 0 != 0) print int($5 * 100 / $6) "%"}'
for RING in $(ringsh supervisor ringList); do echo " #### $RING ####"; ringsh supervisor ringStatus $RING | egrep -i '^disk' | awk -F ' ' '{if ($6 + 0 != 0) print $3, "is", int($5 * 100 / $6)"% full"}'; done
```
#### Purge batch / chunks deleted:
```bash
for NODE in $(ringsh supervisor loadConf META | awk '{print $3}'); do echo " ### using node $NODE";ringsh -r META -u $NODE node dumpStats flags_01 ; done
for NODE in $(ringsh supervisor loadConf META | awk '{print $3}'); do echo " ### using node $NODE";ringsh -r DATA -u $NODE node purgeTask fullqueue=1 timetolive=0 absttl=0; done
```
#### Increase the number of batch deletes (default 1000)
```bash
for NODE in {1..6}; do ringsh -u DATA-storage01-n$NODE -r DATA node configSet msgstore_protocol_chord chordpurgemaxbatch 10000; done
```
#### Rebuild activity:
```bash
salt -G 'roles:ROLE_STORE' cmd.run "grep DELETE /var/log/scality-srebuildd.log-20211001 | cut -c 1-9 | uniq -c"
```