Mirror of https://github.com/dani-garcia/vaultwarden.git, synced 2026-03-23 10:19:21 -07:00

Compare commits (100 commits; author and date columns were not captured)

| SHA1 |
|---|
| 9e4d372213 |
| d0bf0ab237 |
| e327583aa5 |
| ead2f02cbd |
| c453528dc1 |
| 6ae48aa8c2 |
| 88643fd9d5 |
| 73e0002219 |
| c49ee47de0 |
| 14408396bb |
| 6cbb724069 |
| a2316ca091 |
| c476e19796 |
| 9f393cfd9d |
| 450c4d4d97 |
| 75e62abed0 |
| 97f9eb1320 |
| 53cc8a65af |
| f94ac6ca61 |
| cee3fd5ba2 |
| 016fe2269e |
| 03c0a5e405 |
| cbbed79036 |
| 4af81ec50e |
| a5ba67fef2 |
| 4cebe1fff4 |
| a984dbbdf3 |
| 881524bd54 |
| 44da9e6ca7 |
| 4c0c8f7432 |
| f67854c59c |
| a1c1b9ab3b |
| 395979e834 |
| fce6cb5865 |
| 338756550a |
| d014eede9a |
| 9930a0d752 |
| 9928a5404b |
| a6e0ddcdf1 |
| acab70ed89 |
| c0d149060f |
| 344f00d9c9 |
| b26afb970a |
| 34ed5ce4b3 |
| 9375d5b8c2 |
| e3678b4b56 |
| b4c95fb4ac |
| 0bb33e04bb |
| 4d33e24099 |
| 2cdce04662 |
| 756d108f6a |
| ca20b3d80c |
| 4ab9362971 |
| 4e8828e41a |
| f8d1cfad2a |
| b0a411b733 |
| 81741647f3 |
| f36bd72a7f |
| 8c10de3edd |
| 0ab10a7c43 |
| a1a5e00ff5 |
| 8af4b593fa |
| 9bef2c120c |
| f7d99c43b5 |
| ca0fd7a31b |
| 9e1550af8e |
| a99c9715f6 |
| 1a888b5355 |
| 10d5c7738a |
| 80f23e6d78 |
| d5ed2ce6df |
| 5e649f0d0d |
| 612c0e9478 |
| 0d2b3bfb99 |
| c934838ace |
| 4350e9d241 |
| 0cdc0cb147 |
| 20535065d7 |
| a23f4a704b |
| 93f2f74767 |
| 37ca202247 |
| 37525b1e7e |
| d594b5a266 |
| 41add45e67 |
| 08b168a0a1 |
| 978ef2bc8b |
| 881d1f4334 |
| 56b4f46d7d |
| f6bd8b3462 |
| 1f0f64d961 |
| 42ba817a4c |
| dd98fe860b |
| 1fe9f101be |
| c68fbb41d2 |
| 91e80657e4 |
| 2db30f918e |
| cfceac3909 |
| 58b046fd10 |
| 227779256c |
| 89b5f7c98d |
.env.template

@@ -61,6 +61,10 @@
 ## To control this on a per-org basis instead, use the "Disable Send" org policy.
 # SENDS_ALLOWED=true
 
+## Controls whether users can enable emergency access to their accounts.
+## This setting applies globally to all users.
+# EMERGENCY_ACCESS_ALLOWED=true
+
 ## Job scheduler settings
 ##
 ## Job schedules use a cron-like syntax (as parsed by https://crates.io/crates/cron),
@@ -77,6 +81,18 @@
 ## Cron schedule of the job that checks for trashed items to delete permanently.
 ## Defaults to daily (5 minutes after midnight). Set blank to disable this job.
 # TRASH_PURGE_SCHEDULE="0 5 0 * * *"
+##
+## Cron schedule of the job that checks for incomplete 2FA logins.
+## Defaults to once every minute. Set blank to disable this job.
+# INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
+##
+## Cron schedule of the job that sends expiration reminders to emergency access grantors.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
+##
+## Cron schedule of the job that grants emergency access requests that have met the required wait time.
+## Defaults to hourly (5 minutes after the hour). Set blank to disable this job.
+# EMERGENCY_REQUEST_TIMEOUT_SCHEDULE="0 5 * * * *"
 
 ## Enable extended logging, which shows timestamps and targets in the logs
 # EXTENDED_LOGGING=true
@@ -208,6 +224,13 @@
 ## This setting applies globally, so make sure to inform all users of any changes to this setting.
 # TRASH_AUTO_DELETE_DAYS=
 
+## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
+## resulting in an email notification. An incomplete 2FA login is one where the correct
+## master password was provided but the required 2FA step was not completed, which
+## potentially indicates a master password compromise. Set to 0 to disable this check.
+## This setting applies globally to all users.
+# INCOMPLETE_2FA_TIME_LIMIT=3
+
 ## Controls the PBBKDF password iterations to apply on the server
 ## The change only applies when the password is changed
 # PASSWORD_ITERATIONS=100000
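A note on the schedule format: the `cron` crate referenced above parses a seconds-first expression (sec min hour day-of-month month day-of-week, with an optional trailing year), not the classic five-field crontab. A minimal sketch of how the new settings could look in a deployment's `.env`, with values taken from the defaults in the diff:

    # Allow users to enable emergency access for their accounts.
    EMERGENCY_ACCESS_ALLOWED=true
    # "30 * * * * *" fires at second 30 of every minute.
    INCOMPLETE_2FA_SCHEDULE="30 * * * * *"
    # "0 5 * * * *" fires at second 0, minute 5 of every hour (hh:05:00).
    EMERGENCY_NOTIFICATION_REMINDER_SCHEDULE="0 5 * * * *"
    # Setting a schedule to blank disables that job entirely.
    TRASH_PURGE_SCHEDULE=""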
.github/workflows/build.yml (vendored, 90 lines changed)

@@ -2,36 +2,23 @@ name: Build
 
 on:
   push:
-    paths-ignore:
-      - "*.md"
-      - "*.txt"
-      - ".dockerignore"
-      - ".env.template"
-      - ".gitattributes"
-      - ".gitignore"
-      - "azure-pipelines.yml"
-      - "docker/**"
-      - "hooks/**"
-      - "tools/**"
-      - ".github/FUNDING.yml"
-      - ".github/ISSUE_TEMPLATE/**"
-      - ".github/security-contact.gif"
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
   pull_request:
-    # Ignore when there are only changes done too one of these paths
-    paths-ignore:
-      - "*.md"
-      - "*.txt"
-      - ".dockerignore"
-      - ".env.template"
-      - ".gitattributes"
-      - ".gitignore"
-      - "azure-pipelines.yml"
-      - "docker/**"
-      - "hooks/**"
-      - "tools/**"
-      - ".github/FUNDING.yml"
-      - ".github/ISSUE_TEMPLATE/**"
-      - ".github/security-contact.gif"
+    paths:
+      - ".github/workflows/build.yml"
+      - "src/**"
+      - "migrations/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
 
 jobs:
   build:
@@ -44,30 +31,22 @@ jobs:
       matrix:
         channel:
          - nightly
-         # - stable
        target-triple:
          - x86_64-unknown-linux-gnu
-         # - x86_64-unknown-linux-musl
        include:
          - target-triple: x86_64-unknown-linux-gnu
            host-triple: x86_64-unknown-linux-gnu
            features: [sqlite,mysql,postgresql] # Remember to update the `cargo test` to match the amount of features
            channel: nightly
-           os: ubuntu-18.04
+           os: ubuntu-20.04
            ext: ""
-         # - target-triple: x86_64-unknown-linux-gnu
-         #   host-triple: x86_64-unknown-linux-gnu
-         #   features: "sqlite,mysql,postgresql"
-         #   channel: stable
-         #   os: ubuntu-18.04
-         #   ext: ""
 
     name: Building ${{ matrix.channel }}-${{ matrix.target-triple }}
     runs-on: ${{ matrix.os }}
     steps:
       # Checkout the repo
       - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
       # End Checkout the repo
@@ -86,13 +65,13 @@ jobs:
 
       # Enable Rust Caching
-      - uses: Swatinem/rust-cache@v1
+      - uses: Swatinem/rust-cache@842ef286fff290e445b90b4002cc9807c3669641 # v1.3.0
       # End Enable Rust Caching
 
       # Uses the rust-toolchain file to determine version
       - name: 'Install ${{ matrix.channel }}-${{ matrix.host-triple }} for target: ${{ matrix.target-triple }}'
-        uses: actions-rs/toolchain@v1
+        uses: actions-rs/toolchain@b2417cde72dcf67f306c0ae8e0828a81bf0b189f # v1.0.6
         with:
           profile: minimal
           target: ${{ matrix.target-triple }}
@@ -103,28 +82,28 @@ jobs:
       # Run cargo tests (In release mode to speed up future builds)
       # First test all features together, afterwards test them separately.
       - name: "`cargo test --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: test
           args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
       # Test single features
       # 0: sqlite
       - name: "`cargo test --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: test
           args: --release --features ${{ matrix.features[0] }} --target ${{ matrix.target-triple }}
         if: ${{ matrix.features[0] != '' }}
       # 1: mysql
       - name: "`cargo test --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: test
           args: --release --features ${{ matrix.features[1] }} --target ${{ matrix.target-triple }}
         if: ${{ matrix.features[1] != '' }}
       # 2: postgresql
       - name: "`cargo test --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: test
           args: --release --features ${{ matrix.features[2] }} --target ${{ matrix.target-triple }}
@@ -134,7 +113,7 @@ jobs:
 
       # Run cargo clippy, and fail on warnings (In release mode to speed up future builds)
       - name: "`cargo clippy --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: clippy
           args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }} -- -D warnings
@@ -143,7 +122,7 @@ jobs:
 
       # Run cargo fmt
       - name: '`cargo fmt`'
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: fmt
           args: --all -- --check
@@ -152,7 +131,7 @@ jobs:
 
       # Build the binary
       - name: "`cargo build --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}`"
-        uses: actions-rs/cargo@v1
+        uses: actions-rs/cargo@ae10961054e4aa8b4aa7dffede299aaf087aa33b # v1.0.1
         with:
           command: build
           args: --release --features ${{ join(matrix.features, ',') }} --target ${{ matrix.target-triple }}
@@ -161,21 +140,8 @@ jobs:
 
       # Upload artifact to Github Actions
       - name: Upload artifact
-        uses: actions/upload-artifact@v2
+        uses: actions/upload-artifact@27121b0bdffd731efa15d66772be8dc71245d074 # v2.2.4
         with:
           name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
           path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
       # End Upload artifact to Github Actions
 
-
-      ## This is not used at the moment
-      ## We could start using this when we can build static binaries
-      # Upload to github actions release
-      # - name: Release
-      #   uses: Shopify/upload-to-release@1
-      #   if: startsWith(github.ref, 'refs/tags/')
-      #   with:
-      #     name: vaultwarden-${{ matrix.target-triple }}${{ matrix.ext }}
-      #     path: target/${{ matrix.target-triple }}/release/vaultwarden${{ matrix.ext }}
-      #     repo-token: ${{ secrets.GITHUB_TOKEN }}
-      # End Upload to github actions release
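The recurring change across these hunks pins every third-party action to a full commit SHA, so a moved or retagged release cannot silently change what CI runs; the original tag is kept as a trailing comment. One way to look up the commit behind a tag before pinning it, sketched with the checkout action from the hunk above:

    # Resolve the commit a release tag points at, then pin that SHA in the workflow.
    git ls-remote https://github.com/actions/checkout v2.3.4
    # expected output: 5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f  refs/tags/v2.3.4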
.github/workflows/hadolint.yml (vendored, 7 lines changed)

@@ -2,11 +2,10 @@ name: Hadolint
 
 on:
   push:
-    # Ignore when there are only changes done too one of these paths
     paths:
       - "docker/**"
 
   pull_request:
-    # Ignore when there are only changes done too one of these paths
     paths:
       - "docker/**"
 
@@ -17,7 +16,7 @@ jobs:
     steps:
       # Checkout the repo
      - name: Checkout
-        uses: actions/checkout@v2
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
       # End Checkout the repo
 
@@ -28,7 +27,7 @@ jobs:
          sudo curl -L https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m) -o /usr/local/bin/hadolint && \
          sudo chmod +x /usr/local/bin/hadolint
        env:
-          HADOLINT_VERSION: 2.5.0
+          HADOLINT_VERSION: 2.7.0
      # End Download hadolint
 
      # Test Dockerfiles
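For reference, the workflow fetches a pinned hadolint binary straight from its GitHub release page; the same check can be run locally against any of the repo's Dockerfiles, along the lines of this sketch mirroring the workflow's download step:

    HADOLINT_VERSION=2.7.0
    sudo curl -L "https://github.com/hadolint/hadolint/releases/download/v${HADOLINT_VERSION}/hadolint-$(uname -s)-$(uname -m)" -o /usr/local/bin/hadolint
    sudo chmod +x /usr/local/bin/hadolint
    hadolint docker/amd64/Dockerfile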
.github/workflows/release.yml (vendored, new file, 119 lines)

@@ -0,0 +1,119 @@
+name: Release
+
+on:
+  push:
+    paths:
+      - ".github/workflows/release.yml"
+      - "src/**"
+      - "migrations/**"
+      - "hooks/**"
+      - "docker/**"
+      - "Cargo.*"
+      - "build.rs"
+      - "diesel.toml"
+      - "rust-toolchain"
+
+    branches: # Only on paths above
+      - main
+
+    tags: # Always, regardless of paths above
+      - '*'
+
+jobs:
+  # https://github.com/marketplace/actions/skip-duplicate-actions
+  # Some checks to determine if we need to continue with building a new docker.
+  # We will skip this check if we are creating a tag, because that has the same hash as a previous run already.
+  skip_check:
+    runs-on: ubuntu-latest
+    if: ${{ github.repository == 'dani-garcia/vaultwarden' }}
+    outputs:
+      should_skip: ${{ steps.skip_check.outputs.should_skip }}
+    steps:
+      - name: Skip Duplicates Actions
+        id: skip_check
+        uses: fkirc/skip-duplicate-actions@f75dd6564bb646f95277dc8c3b80612e46a4a1ea # v3.4.1
+        with:
+          cancel_others: 'true'
+        # Only run this when not creating a tag
+        if: ${{ startsWith(github.ref, 'refs/heads/') }}
+
+  docker-build:
+    runs-on: ubuntu-latest
+    needs: skip_check
+    # Start a local docker registry to be used to generate multi-arch images.
+    services:
+      registry:
+        image: registry:2
+        ports:
+          - 5000:5000
+    env:
+      DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speedup building!
+      # DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io/<user>/<repo>'
+      DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }}
+      SOURCE_COMMIT: ${{ github.sha }}
+      SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
+    if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
+    strategy:
+      matrix:
+        base_image: ["debian","alpine"]
+
+    steps:
+      # Checkout the repo
+      - name: Checkout
+        uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f # v2.3.4
+        with:
+          fetch-depth: 0
+
+      # Login to Docker Hub
+      - name: Login to Docker Hub
+        uses: docker/login-action@f054a8b539a109f9f41c372932f1ae047eff08c9 # v1.10.0
+        with:
+          username: ${{ secrets.DOCKERHUB_USERNAME }}
+          password: ${{ secrets.DOCKERHUB_TOKEN }}
+
+      # Determine Docker Tag
+      - name: Init Variables
+        id: vars
+        shell: bash
+        run: |
+          # Check which main tag we are going to build determined by github.ref
+          if [[ "${{ github.ref }}" == refs/tags/* ]]; then
+            echo "set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
+            echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
+          elif [[ "${{ github.ref }}" == refs/heads/* ]]; then
+            echo "set-output name=DOCKER_TAG::testing"
+            echo "::set-output name=DOCKER_TAG::testing"
+          fi
+      # End Determine Docker Tag
+
+      - name: Build Debian based images
+        shell: bash
+        env:
+          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
+        run: |
+          ./hooks/build
+        if: ${{ matrix.base_image == 'debian' }}
+
+      - name: Push Debian based images
+        shell: bash
+        env:
+          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}"
+        run: |
+          ./hooks/push
+        if: ${{ matrix.base_image == 'debian' }}
+
+      - name: Build Alpine based images
+        shell: bash
+        env:
+          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
+        run: |
+          ./hooks/build
+        if: ${{ matrix.base_image == 'alpine' }}
+
+      - name: Push Alpine based images
+        shell: bash
+        env:
+          DOCKER_TAG: "${{steps.vars.outputs.DOCKER_TAG}}-alpine"
+        run: |
+          ./hooks/push
+        if: ${{ matrix.base_image == 'alpine' }}
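In the `Init Variables` step above, only the `echo` lines that start with the `::set-output` workflow command are interpreted by the Actions runner; the preceding plain `echo` of the same text just makes the chosen value visible in the job log. The same tag-vs-branch logic as a standalone sketch (the `GITHUB_REF` value here is hypothetical):

    GITHUB_REF="refs/tags/1.23.0"   # provided by the runner in a real job
    if [[ "${GITHUB_REF}" == refs/tags/* ]]; then
        # Tag build: strip the refs/tags/ prefix, yielding "1.23.0".
        echo "::set-output name=DOCKER_TAG::${GITHUB_REF#refs/*/}"
    elif [[ "${GITHUB_REF}" == refs/heads/* ]]; then
        # Branch build: always tagged "testing".
        echo "::set-output name=DOCKER_TAG::testing"
    fi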
.pre-commit-config.yaml

@@ -7,6 +7,7 @@ repos:
       - id: check-json
       - id: check-toml
       - id: end-of-file-fixer
+        exclude: "(.*js$|.*css$)"
       - id: check-case-conflict
       - id: check-merge-conflict
       - id: detect-private-key
Cargo.lock (generated, 1020 lines changed): file diff suppressed because it is too large.
Cargo.toml (45 lines changed)

@@ -2,7 +2,9 @@
 name = "vaultwarden"
 version = "1.0.0"
 authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
-edition = "2018"
+edition = "2021"
+rust-version = "1.57"
+resolver = "2"
 
 repository = "https://github.com/dani-garcia/vaultwarden"
 readme = "README.md"
@@ -32,36 +34,36 @@ rocket = { version = "=0.5.0-dev", features = ["tls"], default-features = false
 rocket_contrib = "=0.5.0-dev"
 
 # HTTP client
-reqwest = { version = "0.11.4", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies"] }
+reqwest = { version = "0.11.7", features = ["blocking", "json", "gzip", "brotli", "socks", "cookies", "trust-dns"] }
 
 # Used for custom short lived cookie jar
 cookie = "0.15.1"
-cookie_store = "0.15.0"
-bytes = "1.0.1"
+cookie_store = "0.15.1"
+bytes = "1.1.0"
 url = "2.2.2"
 
 # multipart/form-data support
 multipart = { version = "0.18.0", features = ["server"], default-features = false }
 
 # WebSockets library
-ws = { version = "0.11.0", package = "parity-ws" }
+ws = { version = "0.11.1", package = "parity-ws" }
 
 # MessagePack library
-rmpv = "0.4.7"
+rmpv = "1.0.0"
 
 # Concurrent hashmap implementation
 chashmap = "2.2.2"
 
 # A generic serialization/deserialization framework
-serde = { version = "1.0.126", features = ["derive"] }
-serde_json = "1.0.64"
+serde = { version = "1.0.130", features = ["derive"] }
+serde_json = "1.0.72"
 
 # Logging
 log = "0.4.14"
 fern = { version = "0.6.0", features = ["syslog-4"] }
 
 # A safe, extensible ORM and Query builder
-diesel = { version = "1.4.7", features = [ "chrono", "r2d2"] }
+diesel = { version = "1.4.8", features = [ "chrono", "r2d2"] }
 diesel_migrations = "1.4.0"
 
 # Bundled SQLite
@@ -76,14 +78,14 @@ uuid = { version = "0.8.2", features = ["v4"] }
 
 # Date and time libraries
 chrono = { version = "0.4.19", features = ["serde"] }
-chrono-tz = "0.5.3"
+chrono-tz = "0.6.0"
 time = "0.2.27"
 
 # Job scheduler
 job_scheduler = "1.2.1"
 
 # TOTP library
-oath = "0.10.2"
+totp-lite = "1.0.3"
 
 # Data encoding library
 data-encoding = "2.3.2"
@@ -93,7 +95,7 @@ jsonwebtoken = "7.2.0"
 
 # U2F library
 u2f = "0.2.0"
-webauthn-rs = "=0.3.0-alpha.9"
+webauthn-rs = "0.3.0"
 
 # Yubico Library
 yubico = { version = "0.10.0", features = ["online-tokio"], default-features = false }
@@ -109,20 +111,20 @@ num-traits = "0.2.14"
 num-derive = "0.3.3"
 
 # Email libraries
-tracing = { version = "0.1.26", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
-lettre = { version = "0.10.0-rc.3", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
+tracing = { version = "0.1.29", features = ["log"] } # Needed to have lettre trace logging used when SMTP_DEBUG is enabled.
+lettre = { version = "0.10.0-rc.4", features = ["smtp-transport", "builder", "serde", "native-tls", "hostname", "tracing"], default-features = false }
 
 # Template library
-handlebars = { version = "4.1.0", features = ["dir_source"] }
+handlebars = { version = "4.1.5", features = ["dir_source"] }
 
 # For favicon extraction from main website
 html5ever = "0.25.1"
 markup5ever_rcdom = "0.1.0"
-regex = { version = "1.5.4", features = ["std", "perf"], default-features = false }
-data-url = "0.1.0"
+regex = { version = "1.5.4", features = ["std", "perf", "unicode-perl"], default-features = false }
+data-url = "0.1.1"
 
 # Used by U2F, JWT and Postgres
-openssl = "0.10.35"
+openssl = "0.10.38"
 
 # URL encoding library
 percent-encoding = "2.1.0"
@@ -133,19 +135,16 @@ idna = "0.2.3"
 pico-args = "0.4.2"
 
 # Logging panics to logfile instead stderr only
-backtrace = "0.3.60"
+backtrace = "0.3.63"
 
 # Macro ident concatenation
-paste = "1.0.5"
+paste = "1.0.6"
 
 [patch.crates-io]
 # Use newest ring
 rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
 rocket_contrib = { git = 'https://github.com/SergioBenitez/Rocket', rev = '263e39b5b429de1913ce7e3036575a7b4d88b6d7' }
 
-# For favicon extraction from main website
-data-url = { git = 'https://github.com/servo/rust-url', package="data-url", rev = 'eb7330b5296c0d43816d1346211b74182bb4ae37' }
-
 # The maintainer of the `job_scheduler` crate doesn't seem to have responded
 # to any issues or PRs for almost a year (as of April 2021). This hopefully
 # temporary fork updates Cargo.toml to use more up-to-date dependencies.
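Besides the version bumps, two dependency sources change here: the TOTP implementation moves from `oath` to `totp-lite`, and the git-pinned `data-url` entry under `[patch.crates-io]` is dropped in favor of the released 0.1.1 from crates.io. After an edit like this, the lockfile would typically be refreshed along these lines (a sketch; the exact commands depend on the workflow):

    # Re-resolve the bumped dependency and rebuild with all three database backends.
    cargo update -p data-url --precise 0.1.1
    cargo build --release --features sqlite,mysql,postgresql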
README.md

@@ -1,4 +1,4 @@
-### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/#download)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
+### Alternative implementation of the Bitwarden server API written in Rust and compatible with [upstream Bitwarden clients](https://bitwarden.com/download/)*, perfect for self-hosted deployment where running the official resource-heavy service might not be ideal.
 
 📢 Note: This project was known as Bitwarden_RS and has been renamed to separate itself from the official Bitwarden server in the hopes of avoiding confusion and trademark/branding issues. Please see [#1642](https://github.com/dani-garcia/vaultwarden/discussions/1642) for more explanation.
docker/Dockerfile.j2

@@ -1,10 +1,12 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
-{% set build_stage_base_image = "rust:1.53" %}
+{% set build_stage_base_image = "rust:1.55-buster" %}
 {% if "alpine" in target_file %}
 {% if "amd64" in target_file %}
-{% set build_stage_base_image = "clux/muslrust:nightly-2021-06-24" %}
+{% set build_stage_base_image = "clux/muslrust:nightly-2021-10-23" %}
 {% set runtime_stage_base_image = "alpine:3.14" %}
 {% set package_arch_target = "x86_64-unknown-linux-musl" %}
 {% elif "armv7" in target_file %}
@@ -40,12 +42,17 @@
 {% else %}
 {% set package_arch_target_param = "" %}
 {% endif %}
+{% if "buildx" in target_file %}
+{% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %}
+{% else %}
+{% set mount_rust_cache = "" %}
+{% endif %}
 # Using multistage build:
 # 	https://docs.docker.com/develop/develop-images/multistage-build/
 # 	https://whitfin.io/speeding-up-rust-docker-builds/
 ####################### VAULT BUILD IMAGE #######################
-{% set vault_version = "2.21.1" %}
-{% set vault_image_digest = "sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5" %}
+{% set vault_version = "2.25.0" %}
+{% set vault_image_digest = "sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527" %}
 # The web-vault digest specifies a particular web-vault build on Docker Hub.
 # Using the digest instead of the tag name provides better security,
 # as the digest of an image is immutable, whereas a tag name can later
@@ -86,22 +93,40 @@ ARG DB=sqlite,mysql,postgresql
 {% endif %}
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
+{# {% if "alpine" not in target_file and "buildx" in target_file %}
+# Debian based Buildx builds can use some special apt caching to speedup building.
+# By default Debian based images have some rules to keep docker builds clean, we need to remove this.
+# See: https://hub.docker.com/r/docker/dockerfile
+RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
+{% endif %} #}
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 {% if "alpine" in target_file %}
-ENV USER "root"
 ENV RUSTFLAGS='-C link-arg=-s'
 {% if "armv7" in target_file %}
+{#- https://gcc.gnu.org/onlinedocs/gcc/ARM-Options.html -#}
 ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
 {% endif %}
 {% elif "arm" in target_file %}
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for {{ package_arch_name }} architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture {{ package_arch_name }} \
     && apt-get update \
     && apt-get install -y \
@@ -110,24 +135,43 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
        libc6-dev{{ package_arch_prefix }} \
        libpq5{{ package_arch_prefix }} \
        libpq-dev \
+       libmariadb3:amd64 \
        libmariadb-dev{{ package_arch_prefix }} \
        libmariadb-dev-compat{{ package_arch_prefix }} \
        gcc-{{ package_cross_compiler }} \
-    && mkdir -p ~/.cargo \
-    && echo '[target.{{ package_arch_target }}]' >> ~/.cargo/config \
-    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> ~/.cargo/config
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "{{ package_cross_compiler }}-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/{{ package_cross_compiler }}"]' >> "${CARGO_HOME}/config"
 
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
-{% endif -%}
+# Set arm specific environment values
+ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
+ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
 
-{% if "amd64" in target_file and "alpine" not in target_file %}
+{% elif "amd64" in target_file %}
 # Install DB packages
-RUN apt-get update && apt-get install -y \
+RUN apt-get update \
+    && apt-get install -y \
        --no-install-recommends \
        libmariadb-dev{{ package_arch_prefix }} \
        libpq-dev{{ package_arch_prefix }} \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 {% endif %}
@@ -140,37 +184,14 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
 
-{% if "alpine" not in target_file %}
-{% if "arm" in target_file %}
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the {{ package_arch_prefix }} version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5{{ package_arch_prefix }} package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/{{ package_cross_compiler }}/libpq.so.5 /usr/lib/{{ package_cross_compiler }}/libpq.so
-
-ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_compiler }}-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}"
-ENV OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}"
-{% endif -%}
-{% endif %}
 {% if package_arch_target is defined %}
-RUN rustup target add {{ package_arch_target }}
+RUN {{ mount_rust_cache -}} rustup target add {{ package_arch_target }}
 {% endif %}
 
 # Builds your dependencies and removes the
 # dummy project, except the target folder
 # This folder contains the compiled dependencies
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }} \
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} \
    && find . -not -path "./target*" -delete
 
 # Copies the complete project
@@ -182,7 +203,7 @@ RUN touch src/main.rs
 
 # Builds again, this time it'll just be
 # your actual source files being built
-RUN cargo build --features ${DB} --release{{ package_arch_target_param }}
+RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }}
 {% if "alpine" in target_file %}
 {% if "armv7" in target_file %}
 # hadolint ignore=DL3059
@@ -212,6 +233,7 @@ RUN mkdir /data \
 {% if "alpine" in runtime_stage_base_image %}
    && apk add --no-cache \
        openssl \
+       tzdata \
        curl \
        dumb-init \
 {% if "mysql" in features %}
@@ -230,6 +252,7 @@ RUN mkdir /data \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 {% endif %}
docker/Makefile

@@ -7,3 +7,9 @@ all: $(OBJECTS)
 
 %/Dockerfile.alpine: Dockerfile.j2 render_template
 	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+
+%/Dockerfile.buildx: Dockerfile.j2 render_template
+	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
+
+%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template
+	./render_template "$<" "{\"target_file\":\"$@\"}" > "$@"
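The two new pattern rules generate the buildx variants exactly like the existing ones. Expanded for a single target, the recipe is equivalent to the following sketch, with `$<` and `$@` substituted for the amd64 target:

    # What `make amd64/Dockerfile.buildx` effectively runs:
    ./render_template "Dockerfile.j2" "{\"target_file\":\"amd64/Dockerfile.buildx\"}" > "amd64/Dockerfile.buildx"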
docker/amd64/Dockerfile

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,33 +16,42 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 # Install DB packages
-RUN apt-get update && apt-get install -y \
+RUN apt-get update \
+    && apt-get install -y \
        --no-install-recommends \
        libmariadb-dev \
        libpq-dev \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 
 # Creates a dummy project used to grab dependencies
@@ -90,6 +101,7 @@ RUN mkdir /data \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
docker/amd64/Dockerfile.alpine

@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,29 +16,35 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM clux/muslrust:nightly-2021-06-24 as build
+FROM clux/muslrust:nightly-2021-10-23 as build
 
 # Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
 ARG DB=sqlite,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
 
-# Don't download rust docs
-RUN rustup set profile minimal
-
-ENV USER "root"
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 ENV RUSTFLAGS='-C link-arg=-s'
 
 # Creates a dummy project used to grab dependencies
@@ -82,6 +90,7 @@ ENV SSL_CERT_DIR=/etc/ssl/certs
 RUN mkdir /data \
    && apk add --no-cache \
        openssl \
+       tzdata \
        curl \
        dumb-init \
        postgresql-libs \
126
docker/amd64/Dockerfile.buildx
Normal file
126
docker/amd64/Dockerfile.buildx
Normal file
@@ -0,0 +1,126 @@
|
|||||||
|
# syntax=docker/dockerfile:1
|
||||||
|
|
||||||
|
# This file was generated using a Jinja2 template.
|
||||||
|
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
|
||||||
|
|
||||||
|
# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# Install DB packages
RUN apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libmariadb-dev \
        libpq-dev \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM debian:buster-slim

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
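The `RUN --mount=type=cache,...` steps above are BuildKit-only syntax (hence the `# syntax=docker/dockerfile:1` header added across these files), so the `.buildx` variants only build when BuildKit is enabled. A minimal sketch of an invocation, assuming the repository root as the build context; the image tag is illustrative:

    $ DOCKER_BUILDKIT=1 docker build -f docker/amd64/Dockerfile.buildx -t vaultwarden:amd64 .
    $ docker buildx build -f docker/amd64/Dockerfile.buildx -t vaultwarden:amd64 --load .

Either form persists the cargo registry and git caches between builds via the cache mounts, so dependency downloads are not repeated on every rebuild.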
docker/amd64/Dockerfile.buildx.alpine (new file, 118 lines)
@@ -0,0 +1,118 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM clux/muslrust:nightly-2021-10-23 as build

# Alpine-based AMD64 (musl) does not support mysql/mariadb during compile time.
ARG DB=sqlite,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

ENV RUSTFLAGS='-C link-arg=-s'

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add x86_64-unknown-linux-musl

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM alpine:3.14

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10
ENV SSL_CERT_DIR=/etc/ssl/certs

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apk add --no-cache \
        openssl \
        tzdata \
        curl \
        dumb-init \
        postgresql-libs \
        ca-certificates

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/x86_64-unknown-linux-musl/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
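Since the runtime stage exposes port 80 and declares `/data` as a volume, a quick smoke test of the resulting Alpine image could look like the following; the tag, host port, and volume name are illustrative:

    $ docker buildx build -f docker/amd64/Dockerfile.buildx.alpine -t vaultwarden:alpine --load .
    $ docker run --rm -p 8080:80 -v vw-data:/data vaultwarden:alpine

The musl toolchain keeps the runtime image down to a bare `alpine:3.14` plus a handful of packages such as `postgresql-libs` and `ca-certificates`.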
docker/arm64/Dockerfile (changed)
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-
-# Don't download rust docs
-RUN rustup set profile minimal
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for arm64 architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
     && dpkg --add-architecture arm64 \
     && apt-get update \
     && apt-get install -y \
@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
         libc6-dev:arm64 \
         libpq5:arm64 \
         libpq-dev \
+        libmariadb3:amd64 \
         libmariadb-dev:arm64 \
         libmariadb-dev-compat:arm64 \
         gcc-aarch64-linux-gnu \
-    && mkdir -p ~/.cargo \
-    && echo '[target.aarch64-unknown-linux-gnu]' >> ~/.cargo/config \
-    && echo 'linker = "aarch64-linux-gnu-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> ~/.cargo/config
-
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"
+
+# Set arm specific environment values
+ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
+ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
 
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
-
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so
-
-ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
-ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"
 RUN rustup target add aarch64-unknown-linux-gnu
 
 # Builds your dependencies and removes the
@@ -128,6 +142,7 @@ RUN mkdir /data \
         dumb-init \
         libmariadb-dev-compat \
         libpq5 \
+    && apt-get clean \
     && rm -rf /var/lib/apt/lists/*
 
 # hadolint ignore=DL3059
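For reference, the three `echo` lines in the hunk above leave `${CARGO_HOME}/config` with the following entry, which tells cargo which cross-linker to invoke for the arm64 target and where the arm64 system libraries live:

    [target.aarch64-unknown-linux-gnu]
    linker = "aarch64-linux-gnu-gcc"
    rustflags = ["-L/usr/lib/aarch64-linux-gnu"]

Since `CARGO_HOME` is set to `/root/.cargo`, this is the same location the old `~/.cargo/config` lines wrote to; the diff just makes the path explicit via the variable.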
docker/arm64/Dockerfile.buildx (new file, 169 lines)
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :arm64 version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for arm64 architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture arm64 \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:arm64 \
        libc6-dev:arm64 \
        libpq5:arm64 \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:arm64 \
        libmariadb-dev-compat:arm64 \
        gcc-aarch64-linux-gnu \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:arm64 package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/aarch64-linux-gnu/libpq.so.5 /usr/lib/aarch64-linux-gnu/libpq.so \
    #
    # Make sure cargo has the right target config
    && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "aarch64-linux-gnu-gcc"' >> "${CARGO_HOME}/config" \
    && echo 'rustflags = ["-L/usr/lib/aarch64-linux-gnu"]' >> "${CARGO_HOME}/config"

# Set arm specific environment values
ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu"
ENV OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add aarch64-unknown-linux-gnu

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/aarch64-debian:buster

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/aarch64-unknown-linux-gnu/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
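The `RUN [ "cross-build-start" ]` and `RUN [ "cross-build-end" ]` steps above are hooks provided by the balenalib base images: they wrap the intervening `RUN` instructions in QEMU user emulation so the arm64 `apt-get` can execute on an amd64 build host. A sketch of a local build, assuming QEMU binfmt handlers are already registered on the host (the first command is one common way to register them):

    $ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    $ DOCKER_BUILDKIT=1 docker build -f docker/arm64/Dockerfile.buildx -t vaultwarden:arm64 .

The Rust compilation itself needs no emulation, since the build stage runs natively on the host and cross-compiles via `gcc-aarch64-linux-gnu`.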
docker/armv6/Dockerfile (changed)
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-
-# Don't download rust docs
-RUN rustup set profile minimal
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armel architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armel \
    && apt-get update \
    && apt-get install -y \
@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
        libc6-dev:armel \
        libpq5:armel \
        libpq-dev \
+        libmariadb3:amd64 \
        libmariadb-dev:armel \
        libmariadb-dev-compat:armel \
        gcc-arm-linux-gnueabi \
-    && mkdir -p ~/.cargo \
-    && echo '[target.arm-unknown-linux-gnueabi]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabi-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> ~/.cargo/config
-
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"
+
+# Set arm specific environment values
+ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
 
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
-
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so
-
-ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"
 RUN rustup target add arm-unknown-linux-gnueabi
 
 # Builds your dependencies and removes the
@@ -128,6 +142,7 @@ RUN mkdir /data \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 
 # hadolint ignore=DL3059
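The force-install dance in the hunk above exists because Debian's `-dev` packages are generally not multi-arch co-installable: once `libmariadb-dev-compat:armel` is present, apt refuses to add the `:amd64` variant. Downloading the `.deb` and installing it with `dpkg --force-all` bypasses apt's conflict check, at the cost of leaving the dependency database broken for any later `apt-get`, which is exactly what the NOTE comment warns about:

    $ apt-get download libmariadb-dev-compat:amd64
    $ dpkg --force-all -i ./libmariadb-dev-compat*.deb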
docker/armv6/Dockerfile.buildx (new file, 169 lines)
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armel version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armel architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armel \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:armel \
        libc6-dev:armel \
        libpq5:armel \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:armel \
        libmariadb-dev-compat:armel \
        gcc-arm-linux-gnueabi \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:armel package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/arm-linux-gnueabi/libpq.so.5 /usr/lib/arm-linux-gnueabi/libpq.so \
    #
    # Make sure cargo has the right target config
    && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \
    && echo 'linker = "arm-linux-gnueabi-gcc"' >> "${CARGO_HOME}/config" \
    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabi"]' >> "${CARGO_HOME}/config"

# Set arm specific environment values
ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc"
ENV CROSS_COMPILE="1"
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi"
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi"

# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app

# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain ./rust-toolchain
COPY ./build.rs ./build.rs

RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add arm-unknown-linux-gnueabi

# Builds your dependencies and removes the
# dummy project, except the target folder
# This folder contains the compiled dependencies
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi \
    && find . -not -path "./target*" -delete

# Copies the complete project
# To avoid copying unneeded files, use .dockerignore
COPY . .

# Make sure that we actually build the project
RUN touch src/main.rs

# Builds again, this time it'll just be
# your actual source files being built
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi

######################## RUNTIME IMAGE ########################
# Create a new stage with a minimal image
# because we already have a binary built
FROM balenalib/rpi-debian:buster

ENV ROCKET_ENV "staging"
ENV ROCKET_PORT=80
ENV ROCKET_WORKERS=10

# hadolint ignore=DL3059
RUN [ "cross-build-start" ]

# Create data folder and Install needed libraries
RUN mkdir /data \
    && apt-get update && apt-get install -y \
        --no-install-recommends \
        openssl \
        ca-certificates \
        curl \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# hadolint ignore=DL3059
RUN [ "cross-build-end" ]

VOLUME /data
EXPOSE 80
EXPOSE 3012

# Copies the files from the context (Rocket.toml file and web-vault)
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY Rocket.toml .
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/arm-unknown-linux-gnueabi/release/vaultwarden .

COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh

HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]

# Configures the startup!
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["/start.sh"]
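Note the target triple in this file: `arm-unknown-linux-gnueabi` is the soft-float (armel) ABI, which ARMv6 boards such as the original Raspberry Pi need, while the armv7 images below use the hard-float `gnueabihf` triple. One way to check which ABI a built binary actually carries, assuming binutils `readelf` is available (hard-float binaries report a `Tag_ABI_VFP_args` attribute; soft-float ones do not):

    $ readelf -A target/arm-unknown-linux-gnueabi/release/vaultwarden | grep -i vfp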
docker/armv7/Dockerfile (changed)
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,32 +16,44 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
-FROM rust:1.53 as build
+FROM rust:1.55-buster as build
 
 # Debian-based builds support multidb
 ARG DB=sqlite,mysql,postgresql
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-
-# Don't download rust docs
-RUN rustup set profile minimal
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
+# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
+# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
+# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
+# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
+# What we can do is a force install, because nothing important is overlapping each other.
+#
 # Install required build libs for armhf architecture.
 # To compile both mysql and postgresql we need some extra packages for both host arch and target arch
-RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
-    /etc/apt/sources.list.d/deb-src.list \
+RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armhf \
    && apt-get update \
    && apt-get install -y \
@@ -48,16 +62,35 @@ RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > \
        libc6-dev:armhf \
        libpq5:armhf \
        libpq-dev \
+        libmariadb3:amd64 \
        libmariadb-dev:armhf \
        libmariadb-dev-compat:armhf \
        gcc-arm-linux-gnueabihf \
-    && mkdir -p ~/.cargo \
-    && echo '[target.armv7-unknown-linux-gnueabihf]' >> ~/.cargo/config \
-    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> ~/.cargo/config \
-    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> ~/.cargo/config
-
-ENV CARGO_HOME "/root/.cargo"
-ENV USER "root"
+    #
+    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
+    && apt-get download libmariadb-dev-compat:amd64 \
+    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
+    && rm -rvf ./libmariadb-dev-compat*.deb \
+    && apt-get clean \
+    && rm -rf /var/lib/apt/lists/* \
+    #
+    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
+    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
+    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
+    # Without this specific file the ld command will fail and compilation fails with it.
+    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
+    #
+    # Make sure cargo has the right target config
+    && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
+    && echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
+    && echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
+
+# Set arm specific environment values
+ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
+ENV CROSS_COMPILE="1"
+ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
+ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
 
 # Creates a dummy project used to grab dependencies
 RUN USER=root cargo new --bin /app
@@ -68,25 +101,6 @@ COPY ./Cargo.* ./
 COPY ./rust-toolchain ./rust-toolchain
 COPY ./build.rs ./build.rs
-
-# NOTE: This should be the last apt-get/dpkg for this stage, since after this it will fail because of broken dependencies.
-# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
-# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
-# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
-# What we can do is a force install, because nothing important is overlapping each other.
-RUN apt-get install -y --no-install-recommends libmariadb3:amd64 \
-    && apt-get download libmariadb-dev-compat:amd64 \
-    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
-    && rm -rvf ./libmariadb-dev-compat*.deb \
-    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
-    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
-    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
-    # Without this specific file the ld command will fail and compilation fails with it.
-    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so
-
-ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
-ENV CROSS_COMPILE="1"
-ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
-ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
 RUN rustup target add armv7-unknown-linux-gnueabihf
 
 # Builds your dependencies and removes the
@@ -128,6 +142,7 @@ RUN mkdir /data \
        dumb-init \
        libmariadb-dev-compat \
        libpq5 \
+    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
 
 # hadolint ignore=DL3059
docker/armv7/Dockerfile.alpine (changed)
@@ -1,3 +1,5 @@
+# syntax=docker/dockerfile:1
+
 # This file was generated using a Jinja2 template.
 # Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
 
@@ -14,15 +16,15 @@
 # - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
 #   click the tag name to view the digest of the image it currently points to.
 # - From the command line:
-#     $ docker pull vaultwarden/web-vault:v2.21.1
-#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.21.1
-#     [vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5]
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
 #
 # - Conversely, to get the tag name from the digest:
-#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5
-#     [vaultwarden/web-vault:v2.21.1]
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
 #
-FROM vaultwarden/web-vault@sha256:29a4fa7bf3790fff9d908b02ac5a154913491f4bf30c95b87b06d8cf1c5516b5 as vault
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
 
 ########################## BUILD IMAGE ##########################
 FROM messense/rust-musl-cross:armv7-musleabihf as build
@@ -32,12 +34,18 @@ FROM messense/rust-musl-cross:armv7-musleabihf as build
 ARG DB=sqlite,vendored_openssl
 
 # Build time options to avoid dpkg warnings and help with reproducible builds.
-ENV DEBIAN_FRONTEND=noninteractive LANG=C.UTF-8 TZ=UTC TERM=xterm-256color
-
-# Don't download rust docs
-RUN rustup set profile minimal
-
-ENV USER "root"
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
 
 ENV RUSTFLAGS='-C link-arg=-s'
 ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
@@ -88,6 +96,7 @@ RUN [ "cross-build-start" ]
 RUN mkdir /data \
     && apk add --no-cache \
        openssl \
+        tzdata \
        curl \
        dumb-init \
        ca-certificates
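`RUSTFLAGS='-C link-arg=-s'` passes `-s` through to the linker so the musl binary is stripped of symbols at link time, keeping the Alpine image small. A rough post-hoc equivalent, shown only for illustration since the Dockerfile does it at link time, would be:

    $ strip target/armv7-unknown-linux-musleabihf/release/vaultwarden

The new `CARGO_HOME` handling mirrors the Debian-based files, and `tzdata` is added at runtime, presumably so the container can honour a `TZ` setting.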
docker/armv7/Dockerfile.buildx (new file, 169 lines)
@@ -0,0 +1,169 @@
# syntax=docker/dockerfile:1

# This file was generated using a Jinja2 template.
# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.

# Using multistage build:
#   https://docs.docker.com/develop/develop-images/multistage-build/
#   https://whitfin.io/speeding-up-rust-docker-builds/
####################### VAULT BUILD IMAGE #######################
# The web-vault digest specifies a particular web-vault build on Docker Hub.
# Using the digest instead of the tag name provides better security,
# as the digest of an image is immutable, whereas a tag name can later
# be changed to point to a malicious image.
#
# To verify the current digest for a given tag name:
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
#   click the tag name to view the digest of the image it currently points to.
# - From the command line:
#     $ docker pull vaultwarden/web-vault:v2.25.0
#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
#
# - Conversely, to get the tag name from the digest:
#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
#     [vaultwarden/web-vault:v2.25.0]
#
FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault

########################## BUILD IMAGE ##########################
FROM rust:1.55-buster as build

# Debian-based builds support multidb
ARG DB=sqlite,mysql,postgresql

# Build time options to avoid dpkg warnings and help with reproducible builds.
ENV DEBIAN_FRONTEND=noninteractive \
    LANG=C.UTF-8 \
    TZ=UTC \
    TERM=xterm-256color \
    CARGO_HOME="/root/.cargo" \
    USER="root"

# Create CARGO_HOME folder and don't download rust docs
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
    && rustup set profile minimal

# NOTE: Any apt-get/dpkg after this stage will fail because of broken dependencies.
# For Diesel-RS migrations_macros to compile with MySQL/MariaDB we need to do some magic.
# We at least need libmariadb3:amd64 installed for the x86_64 version of libmariadb.so (client)
# We also need the libmariadb-dev-compat:amd64 but it can not be installed together with the :armhf version.
# What we can do is a force install, because nothing important is overlapping each other.
#
# Install required build libs for armhf architecture.
# To compile both mysql and postgresql we need some extra packages for both host arch and target arch
RUN sed 's/^deb/deb-src/' /etc/apt/sources.list > /etc/apt/sources.list.d/deb-src.list \
    && dpkg --add-architecture armhf \
    && apt-get update \
    && apt-get install -y \
        --no-install-recommends \
        libssl-dev:armhf \
        libc6-dev:armhf \
        libpq5:armhf \
        libpq-dev \
        libmariadb3:amd64 \
        libmariadb-dev:armhf \
        libmariadb-dev-compat:armhf \
        gcc-arm-linux-gnueabihf \
    #
    # Manual install libmariadb-dev-compat:amd64 ( After this broken dependencies will break apt )
    && apt-get download libmariadb-dev-compat:amd64 \
    && dpkg --force-all -i ./libmariadb-dev-compat*.deb \
    && rm -rvf ./libmariadb-dev-compat*.deb \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    #
    # For Diesel-RS migrations_macros to compile with PostgreSQL we need to do some magic.
    # The libpq5:armhf package seems to not provide a symlink to libpq.so.5 with the name libpq.so.
    # This is only provided by the libpq-dev package which can't be installed for both arch at the same time.
    # Without this specific file the ld command will fail and compilation fails with it.
    && ln -sfnr /usr/lib/arm-linux-gnueabihf/libpq.so.5 /usr/lib/arm-linux-gnueabihf/libpq.so \
|
||||||
|
#
|
||||||
|
# Make sure cargo has the right target config
|
||||||
|
&& echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \
|
||||||
|
&& echo 'linker = "arm-linux-gnueabihf-gcc"' >> "${CARGO_HOME}/config" \
|
||||||
|
&& echo 'rustflags = ["-L/usr/lib/arm-linux-gnueabihf"]' >> "${CARGO_HOME}/config"
|
||||||
|
|
||||||
|
# Set arm specific environment values
|
||||||
|
ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc"
|
||||||
|
ENV CROSS_COMPILE="1"
|
||||||
|
ENV OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf"
|
||||||
|
ENV OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf"
|
||||||
|
|
||||||
|
|
||||||
|
# Creates a dummy project used to grab dependencies
|
||||||
|
RUN USER=root cargo new --bin /app
|
||||||
|
WORKDIR /app
|
||||||
|
|
||||||
|
# Copies over *only* your manifests and build files
|
||||||
|
COPY ./Cargo.* ./
|
||||||
|
COPY ./rust-toolchain ./rust-toolchain
|
||||||
|
COPY ./build.rs ./build.rs
|
||||||
|
|
||||||
|
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-gnueabihf
|
||||||
|
|
||||||
|
# Builds your dependencies and removes the
|
||||||
|
# dummy project, except the target folder
|
||||||
|
# This folder contains the compiled dependencies
|
||||||
|
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf \
|
||||||
|
&& find . -not -path "./target*" -delete
|
||||||
|
|
||||||
|
# Copies the complete project
|
||||||
|
# To avoid copying unneeded files, use .dockerignore
|
||||||
|
COPY . .
|
||||||
|
|
||||||
|
# Make sure that we actually build the project
|
||||||
|
RUN touch src/main.rs
|
||||||
|
|
||||||
|
# Builds again, this time it'll just be
|
||||||
|
# your actual source files being built
|
||||||
|
RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf
|
||||||
|
|
||||||
|
######################## RUNTIME IMAGE ########################
|
||||||
|
# Create a new stage with a minimal image
|
||||||
|
# because we already have a binary built
|
||||||
|
FROM balenalib/armv7hf-debian:buster
|
||||||
|
|
||||||
|
ENV ROCKET_ENV "staging"
|
||||||
|
ENV ROCKET_PORT=80
|
||||||
|
ENV ROCKET_WORKERS=10
|
||||||
|
|
||||||
|
# hadolint ignore=DL3059
|
||||||
|
RUN [ "cross-build-start" ]
|
||||||
|
|
||||||
|
# Create data folder and Install needed libraries
|
||||||
|
RUN mkdir /data \
|
||||||
|
&& apt-get update && apt-get install -y \
|
||||||
|
--no-install-recommends \
|
||||||
|
openssl \
|
||||||
|
ca-certificates \
|
||||||
|
curl \
|
||||||
|
dumb-init \
|
||||||
|
libmariadb-dev-compat \
|
||||||
|
libpq5 \
|
||||||
|
&& apt-get clean \
|
||||||
|
&& rm -rf /var/lib/apt/lists/*
|
||||||
|
|
||||||
|
# hadolint ignore=DL3059
|
||||||
|
RUN [ "cross-build-end" ]
|
||||||
|
|
||||||
|
VOLUME /data
|
||||||
|
EXPOSE 80
|
||||||
|
EXPOSE 3012
|
||||||
|
|
||||||
|
# Copies the files from the context (Rocket.toml file and web-vault)
|
||||||
|
# and the binary from the "build" stage to the current stage
|
||||||
|
WORKDIR /
|
||||||
|
COPY Rocket.toml .
|
||||||
|
COPY --from=vault /web-vault ./web-vault
|
||||||
|
COPY --from=build /app/target/armv7-unknown-linux-gnueabihf/release/vaultwarden .
|
||||||
|
|
||||||
|
COPY docker/healthcheck.sh /healthcheck.sh
|
||||||
|
COPY docker/start.sh /start.sh
|
||||||
|
|
||||||
|
HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
|
||||||
|
|
||||||
|
# Configures the startup!
|
||||||
|
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
|
||||||
|
CMD ["/start.sh"]
|
||||||
125	docker/armv7/Dockerfile.buildx.alpine	Normal file
@@ -0,0 +1,125 @@
+# syntax=docker/dockerfile:1
+
+# This file was generated using a Jinja2 template.
+# Please make your changes in `Dockerfile.j2` and then `make` the individual Dockerfiles.
+
+# Using multistage build:
+# 	https://docs.docker.com/develop/develop-images/multistage-build/
+# 	https://whitfin.io/speeding-up-rust-docker-builds/
+####################### VAULT BUILD IMAGE #######################
+# The web-vault digest specifies a particular web-vault build on Docker Hub.
+# Using the digest instead of the tag name provides better security,
+# as the digest of an image is immutable, whereas a tag name can later
+# be changed to point to a malicious image.
+#
+# To verify the current digest for a given tag name:
+# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
+#   click the tag name to view the digest of the image it currently points to.
+# - From the command line:
+#     $ docker pull vaultwarden/web-vault:v2.25.0
+#     $ docker image inspect --format "{{.RepoDigests}}" vaultwarden/web-vault:v2.25.0
+#     [vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527]
+#
+# - Conversely, to get the tag name from the digest:
+#     $ docker image inspect --format "{{.RepoTags}}" vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527
+#     [vaultwarden/web-vault:v2.25.0]
+#
+FROM vaultwarden/web-vault@sha256:0df389deac9e83c739a1f4ff595f12f493b6c27cb4a22bb8fcaba9dc49b9b527 as vault
+
+########################## BUILD IMAGE ##########################
+FROM messense/rust-musl-cross:armv7-musleabihf as build
+
+# Alpine-based ARM (musl) only supports sqlite during compile time.
+# We now also need to add vendored_openssl, because the current base image we use to build has OpenSSL removed.
+ARG DB=sqlite,vendored_openssl
+
+# Build time options to avoid dpkg warnings and help with reproducible builds.
+ENV DEBIAN_FRONTEND=noninteractive \
+    LANG=C.UTF-8 \
+    TZ=UTC \
+    TERM=xterm-256color \
+    CARGO_HOME="/root/.cargo" \
+    USER="root"
+
+
+# Create CARGO_HOME folder and don't download rust docs
+RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \
+    && rustup set profile minimal
+
+ENV RUSTFLAGS='-C link-arg=-s'
+ENV CFLAGS_armv7_unknown_linux_musleabihf="-mfpu=vfpv3-d16"
+
+# Creates a dummy project used to grab dependencies
+RUN USER=root cargo new --bin /app
+WORKDIR /app
+
+# Copies over *only* your manifests and build files
+COPY ./Cargo.* ./
+COPY ./rust-toolchain ./rust-toolchain
+COPY ./build.rs ./build.rs
+
+RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry rustup target add armv7-unknown-linux-musleabihf
+
+# Builds your dependencies and removes the
+# dummy project, except the target folder
+# This folder contains the compiled dependencies
+RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf \
+    && find . -not -path "./target*" -delete
+
+# Copies the complete project
+# To avoid copying unneeded files, use .dockerignore
+COPY . .
+
+# Make sure that we actually build the project
+RUN touch src/main.rs
+
+# Builds again, this time it'll just be
+# your actual source files being built
+RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf
+# hadolint ignore=DL3059
+RUN musl-strip target/armv7-unknown-linux-musleabihf/release/vaultwarden
+
+######################## RUNTIME IMAGE ########################
+# Create a new stage with a minimal image
+# because we already have a binary built
+FROM balenalib/armv7hf-alpine:3.14
+
+ENV ROCKET_ENV "staging"
+ENV ROCKET_PORT=80
+ENV ROCKET_WORKERS=10
+ENV SSL_CERT_DIR=/etc/ssl/certs
+
+# hadolint ignore=DL3059
+RUN [ "cross-build-start" ]
+
+# Create data folder and Install needed libraries
+RUN mkdir /data \
+    && apk add --no-cache \
+        openssl \
+        tzdata \
+        curl \
+        dumb-init \
+        ca-certificates
+
+# hadolint ignore=DL3059
+RUN [ "cross-build-end" ]
+
+VOLUME /data
+EXPOSE 80
+EXPOSE 3012
+
+# Copies the files from the context (Rocket.toml file and web-vault)
+# and the binary from the "build" stage to the current stage
+WORKDIR /
+COPY Rocket.toml .
+COPY --from=vault /web-vault ./web-vault
+COPY --from=build /app/target/armv7-unknown-linux-musleabihf/release/vaultwarden .
+
+COPY docker/healthcheck.sh /healthcheck.sh
+COPY docker/start.sh /start.sh
+
+HEALTHCHECK --interval=60s --timeout=10s CMD ["/healthcheck.sh"]
+
+# Configures the startup!
+ENTRYPOINT ["/usr/bin/dumb-init", "--"]
+CMD ["/start.sh"]
@@ -34,12 +34,17 @@ for label in "${LABELS[@]}"; do
   LABEL_ARGS+=(--label "${label}")
 done
 
+# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template
+if [[ -n "${DOCKER_BUILDKIT}" ]]; then
+    buildx_suffix=.buildx
+fi
+
 set -ex
 
 for arch in "${arches[@]}"; do
     docker build \
         "${LABEL_ARGS[@]}" \
         -t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \
-        -f docker/${arch}/Dockerfile${distro_suffix} \
+        -f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \
         .
 done
15	hooks/push
@@ -10,7 +10,7 @@ join() { local IFS="$1"; shift; echo "$*"; }
 
 set -ex
 
-echo ">>> Starting local Docker registry..."
+echo ">>> Starting local Docker registry when needed..."
 
 # Docker Buildx's `docker-container` driver is needed for multi-platform
 # builds, but it can't access existing images on the Docker host (like the
@@ -25,7 +25,13 @@ echo ">>> Starting local Docker registry..."
 # Use host networking so the buildx container can access the registry via
 # localhost.
 #
-docker run -d --name registry --network host registry:2 # defaults to port 5000
+# First check if there already is a registry container running, else skip it.
+# This will only happen either locally or running it via Github Actions
+#
+if ! timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5000'; then
+    # defaults to port 5000
+    docker run -d --name registry --network host registry:2
+fi
 
 # Docker Hub sets a `DOCKER_REPO` env var with the format `index.docker.io/user/repo`.
 # Strip the registry portion to construct a local repo path for use in `Dockerfile.buildx`.
@@ -49,7 +55,12 @@ echo ">>> Setting up Docker Buildx..."
 #
 # Ref: https://github.com/docker/buildx/issues/94#issuecomment-534367714
 #
+# Check if there already is a builder running, else skip this and use the existing.
+# This will only happen either locally or running it via Github Actions
+#
+if ! docker buildx inspect builder > /dev/null 2>&1 ; then
 docker buildx create --name builder --use --driver-opt network=host
+fi
 
 echo ">>> Running Docker Buildx..."
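The bash probe `timeout 5 bash -c 'cat < /dev/null > /dev/tcp/localhost/5000'` succeeds only if something is already listening on the registry port, so the script starts `registry:2` at most once. For readers unfamiliar with bash's `/dev/tcp` idiom, here is the same reachability check expressed in Rust, purely as an illustration; the hook itself remains a shell script.

// Illustration of the "is something listening on localhost:5000?" probe
// that hooks/push performs with bash's /dev/tcp redirection.
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

fn registry_is_up() -> bool {
    let addr: SocketAddr = "127.0.0.1:5000".parse().expect("valid socket address");
    // Same 5-second budget as the `timeout 5` in the shell script.
    TcpStream::connect_timeout(&addr, Duration::from_secs(5)).is_ok()
}

fn main() {
    if registry_is_up() {
        println!("registry already running; skipping `docker run registry:2`");
    } else {
        println!("no registry on port 5000; the script would start one");
    }
}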
@@ -0,0 +1 @@
+DROP TABLE emergency_access;
@@ -0,0 +1,14 @@
+CREATE TABLE emergency_access (
+    uuid                    CHAR(36) NOT NULL PRIMARY KEY,
+    grantor_uuid            CHAR(36) REFERENCES users (uuid),
+    grantee_uuid            CHAR(36) REFERENCES users (uuid),
+    email                   VARCHAR(255),
+    key_encrypted           TEXT,
+    atype                   INTEGER NOT NULL,
+    status                  INTEGER NOT NULL,
+    wait_time_days          INTEGER NOT NULL,
+    recovery_initiated_at   DATETIME,
+    last_notification_at    DATETIME,
+    updated_at              DATETIME NOT NULL,
+    created_at              DATETIME NOT NULL
+);
@@ -0,0 +1 @@
+DROP TABLE twofactor_incomplete;
@@ -0,0 +1,9 @@
+CREATE TABLE twofactor_incomplete (
+    user_uuid   CHAR(36) NOT NULL REFERENCES users(uuid),
+    device_uuid CHAR(36) NOT NULL,
+    device_name TEXT NOT NULL,
+    login_time  DATETIME NOT NULL,
+    ip_address  TEXT NOT NULL,
+
+    PRIMARY KEY (user_uuid, device_uuid)
+);
@@ -0,0 +1 @@
+DROP TABLE emergency_access;
@@ -0,0 +1,14 @@
+CREATE TABLE emergency_access (
+    uuid                    CHAR(36) NOT NULL PRIMARY KEY,
+    grantor_uuid            CHAR(36) REFERENCES users (uuid),
+    grantee_uuid            CHAR(36) REFERENCES users (uuid),
+    email                   VARCHAR(255),
+    key_encrypted           TEXT,
+    atype                   INTEGER NOT NULL,
+    status                  INTEGER NOT NULL,
+    wait_time_days          INTEGER NOT NULL,
+    recovery_initiated_at   TIMESTAMP,
+    last_notification_at    TIMESTAMP,
+    updated_at              TIMESTAMP NOT NULL,
+    created_at              TIMESTAMP NOT NULL
+);
@@ -0,0 +1 @@
+DROP TABLE twofactor_incomplete;
@@ -0,0 +1,9 @@
+CREATE TABLE twofactor_incomplete (
+    user_uuid   VARCHAR(40) NOT NULL REFERENCES users(uuid),
+    device_uuid VARCHAR(40) NOT NULL,
+    device_name TEXT NOT NULL,
+    login_time  TIMESTAMP NOT NULL,
+    ip_address  TEXT NOT NULL,
+
+    PRIMARY KEY (user_uuid, device_uuid)
+);
@@ -0,0 +1 @@
+DROP TABLE emergency_access;
@@ -0,0 +1,14 @@
+CREATE TABLE emergency_access (
+    uuid                    TEXT NOT NULL PRIMARY KEY,
+    grantor_uuid            TEXT REFERENCES users (uuid),
+    grantee_uuid            TEXT REFERENCES users (uuid),
+    email                   TEXT,
+    key_encrypted           TEXT,
+    atype                   INTEGER NOT NULL,
+    status                  INTEGER NOT NULL,
+    wait_time_days          INTEGER NOT NULL,
+    recovery_initiated_at   DATETIME,
+    last_notification_at    DATETIME,
+    updated_at              DATETIME NOT NULL,
+    created_at              DATETIME NOT NULL
+);
@@ -0,0 +1 @@
+DROP TABLE twofactor_incomplete;
@@ -0,0 +1,9 @@
+CREATE TABLE twofactor_incomplete (
+    user_uuid   TEXT NOT NULL REFERENCES users(uuid),
+    device_uuid TEXT NOT NULL,
+    device_name TEXT NOT NULL,
+    login_time  DATETIME NOT NULL,
+    ip_address  TEXT NOT NULL,
+
+    PRIMARY KEY (user_uuid, device_uuid)
+);
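The three pairs of migrations above create the same two tables three times, apparently for the MySQL, PostgreSQL, and SQLite backends in turn, judging by the column types (CHAR/DATETIME, VARCHAR/TIMESTAMP, TEXT/DATETIME). As a rough illustration of how such a table surfaces in the Rust code, here is a minimal Diesel-style sketch of a model for `twofactor_incomplete`; the real `table!` and struct definitions live in `src/db/` and may differ in derives and field handling (this sketch assumes Diesel 1.x with the chrono feature).

// A minimal sketch of how the `twofactor_incomplete` table could map to a
// Diesel model. Field names follow the migration columns above; everything
// else here is illustrative, not the project's actual definition.
#[macro_use]
extern crate diesel;

use chrono::NaiveDateTime;

table! {
    twofactor_incomplete (user_uuid, device_uuid) {
        user_uuid -> Text,
        device_uuid -> Text,
        device_name -> Text,
        login_time -> Timestamp,
        ip_address -> Text,
    }
}

#[derive(Queryable, Insertable)]
#[table_name = "twofactor_incomplete"]
pub struct TwoFactorIncomplete {
    pub user_uuid: String,
    pub device_uuid: String,
    pub device_name: String,
    // Time of the login attempt that has not yet completed 2FA; the
    // INCOMPLETE_2FA_SCHEDULE job can then sweep stale rows.
    pub login_time: NaiveDateTime,
    pub ip_address: String,
}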
@@ -1 +1 @@
-nightly-2021-06-24
+nightly-2021-11-05
@@ -1,7 +1,7 @@
 use once_cell::sync::Lazy;
 use serde::de::DeserializeOwned;
 use serde_json::Value;
-use std::{env, time::Duration};
+use std::env;
 
 use rocket::{
     http::{Cookie, Cookies, SameSite, Status},
@@ -18,7 +18,9 @@ use crate::{
     db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
     error::{Error, MapResult},
     mail,
-    util::{format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker},
+    util::{
+        docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
+    },
     CONFIG,
 };
 
@@ -234,7 +236,7 @@ impl AdminTemplateData {
 }
 
 #[get("/", rank = 1)]
-fn admin_page(_token: AdminToken, _conn: DbConn) -> ApiResult<Html<String>> {
+fn admin_page(_token: AdminToken) -> ApiResult<Html<String>> {
     let text = AdminTemplateData::new().render()?;
     Ok(Html(text))
 }
@@ -269,7 +271,7 @@ fn invite_user(data: Json<InviteData>, _token: AdminToken, conn: DbConn) -> Json
     if CONFIG.mail_enabled() {
         mail::send_invite(&user.email, &user.uuid, None, None, &CONFIG.invitation_org_name(), None)?;
     } else {
-        let invitation = Invitation::new(data.email);
+        let invitation = Invitation::new(user.email.clone());
        invitation.save(&conn)?;
     }
 
@@ -460,13 +462,13 @@ struct GitCommit {
 fn get_github_api<T: DeserializeOwned>(url: &str) -> Result<T, Error> {
     let github_api = get_reqwest_client();
 
-    Ok(github_api.get(url).timeout(Duration::from_secs(10)).send()?.error_for_status()?.json::<T>()?)
+    Ok(github_api.get(url).send()?.error_for_status()?.json::<T>()?)
 }
 
 fn has_http_access() -> bool {
     let http_access = get_reqwest_client();
 
-    match http_access.head("https://github.com/dani-garcia/vaultwarden").timeout(Duration::from_secs(10)).send() {
+    match http_access.head("https://github.com/dani-garcia/vaultwarden").send() {
         Ok(r) => r.status().is_success(),
         _ => false,
     }
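The two hunks above drop the per-request `.timeout(...)` calls. That only makes sense if the shared client returned by `get_reqwest_client()` already carries a default timeout; the following is a minimal sketch of such a client builder. The actual helper lives in `src/util.rs`, and its exact options (timeout value, user agent) are assumptions here.

// Sketch of a shared blocking reqwest client with a client-wide timeout,
// so individual requests no longer need .timeout(...) at each call site.
// The real get_reqwest_client() in src/util.rs may set different options.
use std::time::Duration;

use once_cell::sync::Lazy;
use reqwest::blocking::Client;

static CLIENT: Lazy<Client> = Lazy::new(|| {
    Client::builder()
        .timeout(Duration::from_secs(10)) // applies to every request made with this client
        .user_agent("vaultwarden")        // assumed; the real UA string may differ
        .build()
        .expect("Failed to build reqwest client")
});

pub fn get_reqwest_client() -> &'static Client {
    &CLIENT
}

fn main() {
    let _client = get_reqwest_client(); // every call site now shares one configured client
}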
@@ -549,6 +551,7 @@ fn diagnostics(_token: AdminToken, ip_header: IpHeader, conn: DbConn) -> ApiResu
         "web_vault_version": web_vault_version.version,
         "latest_web_build": latest_web_build,
         "running_within_docker": running_within_docker,
+        "docker_base_image": docker_base_image(),
         "has_http_access": has_http_access,
         "ip_header_exists": &ip_header.0.is_some(),
         "ip_header_match": ip_header_name == CONFIG.ip_header(),
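`docker_base_image()` is a new helper in `src/util.rs` that is not part of this excerpt. One plausible implementation, offered purely as an assumption for illustration, distinguishes the Debian and Alpine runtime images by probing well-known release files:

// Hypothetical sketch of docker_base_image(); the real helper in
// src/util.rs may use a different detection method entirely.
use std::path::Path;

pub fn docker_base_image() -> &'static str {
    if Path::new("/etc/debian_version").exists() {
        "Debian" // e.g. the balenalib/*-debian based images
    } else if Path::new("/etc/alpine-release").exists() {
        "Alpine" // e.g. the balenalib/*-alpine based images
    } else {
        "Unknown"
    }
}

fn main() {
    println!("base image: {}", docker_base_image());
}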
@@ -49,6 +49,7 @@ struct RegisterData {
     MasterPasswordHint: Option<String>,
     Name: Option<String>,
     Token: Option<String>,
+    #[allow(dead_code)]
     OrganizationUserId: Option<String>,
 }
 
@@ -62,11 +63,12 @@ struct KeysData {
 #[post("/accounts/register", data = "<data>")]
 fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
     let data: RegisterData = data.into_inner().data;
+    let email = data.Email.to_lowercase();
 
-    let mut user = match User::find_by_mail(&data.Email, &conn) {
+    let mut user = match User::find_by_mail(&email, &conn) {
         Some(user) => {
             if !user.password_hash.is_empty() {
-                if CONFIG.is_signup_allowed(&data.Email) {
+                if CONFIG.is_signup_allowed(&email) {
                     err!("User already exists")
                 } else {
                     err!("Registration not allowed or user already exists")
@@ -75,20 +77,24 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
 
             if let Some(token) = data.Token {
                 let claims = decode_invite(&token)?;
-                if claims.email == data.Email {
+                if claims.email == email {
                     user
                 } else {
                     err!("Registration email does not match invite email")
                 }
-            } else if Invitation::take(&data.Email, &conn) {
+            } else if Invitation::take(&email, &conn) {
                 for mut user_org in UserOrganization::find_invited_by_user(&user.uuid, &conn).iter_mut() {
                     user_org.status = UserOrgStatus::Accepted as i32;
                     user_org.save(&conn)?;
                 }
 
                 user
-            } else if CONFIG.is_signup_allowed(&data.Email) {
-                err!("Account with this email already exists")
+            } else if CONFIG.is_signup_allowed(&email) {
+                // check if it's invited by emergency contact
+                match EmergencyAccess::find_invited_by_grantee_email(&data.Email, &conn) {
+                    Some(_) => user,
+                    _ => err!("Account with this email already exists"),
+                }
             } else {
                 err!("Registration not allowed or user already exists")
             }
@@ -97,8 +103,8 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
             // Order is important here; the invitation check must come first
             // because the vaultwarden admin can invite anyone, regardless
             // of other signup restrictions.
-            if Invitation::take(&data.Email, &conn) || CONFIG.is_signup_allowed(&data.Email) {
-                User::new(data.Email.clone())
+            if Invitation::take(&email, &conn) || CONFIG.is_signup_allowed(&email) {
+                User::new(email.clone())
             } else {
                 err!("Registration not allowed or user already exists")
             }
@@ -106,7 +112,7 @@ fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> EmptyResult {
     };
 
     // Make sure we don't leave a lingering invitation.
-    Invitation::take(&data.Email, &conn);
+    Invitation::take(&email, &conn);
 
     if let Some(client_kdf_iter) = data.KdfIterations {
         user.client_kdf_iter = client_kdf_iter;
@@ -233,7 +239,7 @@ fn post_password(data: JsonUpcase<ChangePassData>, headers: Headers, conn: DbCon
 
     user.set_password(
         &data.NewMasterPasswordHash,
-        Some(vec![String::from("post_rotatekey"), String::from("get_contacts")]),
+        Some(vec![String::from("post_rotatekey"), String::from("get_contacts"), String::from("get_public_keys")]),
     );
     user.akey = data.Key;
     user.save(&conn)
@@ -448,7 +454,7 @@ fn post_email(data: JsonUpcase<ChangeEmailData>, headers: Headers, conn: DbConn)
 }
 
 #[post("/accounts/verify-email")]
-fn post_verify_email(headers: Headers, _conn: DbConn) -> EmptyResult {
+fn post_verify_email(headers: Headers) -> EmptyResult {
     let user = headers.user;
 
     if !CONFIG.mail_enabled() {
@@ -648,7 +654,7 @@ struct VerifyPasswordData {
 }
 
 #[post("/accounts/verify-password", data = "<data>")]
-fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers, _conn: DbConn) -> EmptyResult {
+fn verify_password(data: JsonUpcase<VerifyPasswordData>, headers: Headers) -> EmptyResult {
     let data: VerifyPasswordData = data.into_inner().data;
     let user = headers.user;
 
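The register changes above normalize the submitted address once with `to_lowercase()` and then use the normalized value for every lookup and invitation check, so that `User@Example.com` and `user@example.com` resolve to the same account; note the one place that still passes the raw `data.Email` (the emergency-access lookup). A tiny standalone illustration of why normalizing once at the boundary matters:

// Standalone illustration of case-insensitive email matching: normalize
// once at the boundary, then do every comparison and lookup with the
// normalized value. Inputs here are made up for the example.
fn normalize_email(email: &str) -> String {
    email.trim().to_lowercase()
}

fn main() {
    let submitted = "User@Example.COM ";
    let stored = "user@example.com";
    // Without normalization these would be treated as two different users.
    assert_eq!(normalize_email(submitted), stored);
    println!("'{}' resolves to '{}'", submitted.trim(), stored);
}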
@@ -105,7 +105,7 @@ fn sync(data: Form<SyncData>, headers: Headers, conn: DbConn) -> Json<Value> {
     let collections_json: Vec<Value> =
         collections.iter().map(|c| c.to_json_details(&headers.user.uuid, &conn)).collect();
 
-    let policies = OrgPolicy::find_by_user(&headers.user.uuid, &conn);
+    let policies = OrgPolicy::find_confirmed_by_user(&headers.user.uuid, &conn);
     let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
 
     let ciphers = Cipher::find_by_user_visible(&headers.user.uuid, &conn);
@@ -248,7 +248,7 @@ fn post_ciphers_create(data: JsonUpcase<ShareCipherData>, headers: Headers, conn
     // This check is usually only needed in update_cipher_from_data(), but we
     // need it here as well to avoid creating an empty cipher in the call to
     // cipher.save() below.
-    enforce_personal_ownership_policy(&data.Cipher, &headers, &conn)?;
+    enforce_personal_ownership_policy(Some(&data.Cipher), &headers, &conn)?;
 
     let mut cipher = Cipher::new(data.Cipher.Type, data.Cipher.Name.clone());
     cipher.user_uuid = Some(headers.user.uuid.clone());
@@ -289,8 +289,8 @@ fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, conn: DbConn, nt
 /// allowed to delete or share such ciphers to an org, however.
 ///
 /// Ref: https://bitwarden.com/help/article/policies/#personal-ownership
-fn enforce_personal_ownership_policy(data: &CipherData, headers: &Headers, conn: &DbConn) -> EmptyResult {
-    if data.OrganizationId.is_none() {
+fn enforce_personal_ownership_policy(data: Option<&CipherData>, headers: &Headers, conn: &DbConn) -> EmptyResult {
+    if data.is_none() || data.unwrap().OrganizationId.is_none() {
         let user_uuid = &headers.user.uuid;
         let policy_type = OrgPolicyType::PersonalOwnership;
         if OrgPolicy::is_applicable_to_user(user_uuid, policy_type, conn) {
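Widening `enforce_personal_ownership_policy` to take `Option<&CipherData>` lets the import endpoint (which has no single cipher to inspect) reuse the same check: `None` is treated like a write to the personal vault. The `data.is_none() || data.unwrap()...` expression is equivalent to `map_or(true, ...)`. A reduced, self-contained sketch of the control flow, with the policy lookup stubbed out:

// Reduced sketch of the Option-based policy check. CipherData and the
// policy lookup are stubs; in vaultwarden the real check queries
// OrgPolicy::is_applicable_to_user for the PersonalOwnership policy.
struct CipherData {
    organization_id: Option<String>,
}

fn personal_ownership_applies(user_uuid: &str) -> bool {
    // Stub: pretend the org policy is active for this particular user.
    user_uuid == "restricted-user"
}

fn enforce_personal_ownership_policy(data: Option<&CipherData>, user_uuid: &str) -> Result<(), String> {
    // None (e.g. a bulk import) counts as a personal-vault write, so the
    // policy check runs unless the cipher explicitly targets an organization.
    if data.map_or(true, |d| d.organization_id.is_none()) && personal_ownership_applies(user_uuid) {
        return Err("Policy restricts saving items to the personal vault".into());
    }
    Ok(())
}

fn main() {
    assert!(enforce_personal_ownership_policy(None, "restricted-user").is_err());
    let org_cipher = CipherData { organization_id: Some("org-uuid".into()) };
    assert!(enforce_personal_ownership_policy(Some(&org_cipher), "restricted-user").is_ok());
}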
@@ -309,7 +309,7 @@ pub fn update_cipher_from_data(
     nt: &Notify,
     ut: UpdateType,
 ) -> EmptyResult {
-    enforce_personal_ownership_policy(&data, headers, conn)?;
+    enforce_personal_ownership_policy(Some(&data), headers, conn)?;
 
     // Check that the client isn't updating an existing cipher with stale data.
     if let Some(dt) = data.LastKnownRevisionDate {
@@ -458,6 +458,8 @@ struct RelationsData {
 
 #[post("/ciphers/import", data = "<data>")]
 fn post_ciphers_import(data: JsonUpcase<ImportData>, headers: Headers, conn: DbConn, nt: Notify) -> EmptyResult {
+    enforce_personal_ownership_policy(None, &headers, &conn)?;
+
     let data: ImportData = data.into_inner().data;
 
     // Read and create the folders
@@ -687,12 +689,6 @@ fn put_cipher_share_selected(
         };
     }
 
-    let attachments = Attachment::find_by_ciphers(cipher_ids, &conn);
-
-    if !attachments.is_empty() {
-        err!("Ciphers should not have any attachments.")
-    }
-
     while let Some(cipher) = data.Ciphers.pop() {
         let mut shared_cipher_data = ShareCipherData {
             Cipher: cipher,
@@ -783,10 +779,7 @@ struct AttachmentRequestData {
     Key: String,
     FileName: String,
     FileSize: i32,
-    // We check org owner/admin status via is_write_accessible_to_user(),
-    // so we can just ignore this field.
-    //
-    // AdminRequest: bool,
+    AdminRequest: Option<bool>, // true when attaching from an org vault view
 }
 
 enum FileUploadType {
@@ -821,14 +814,17 @@ fn post_attachment_v2(
     attachment.save(&conn).expect("Error saving attachment");
 
     let url = format!("/ciphers/{}/attachment/{}", cipher.uuid, attachment_id);
+    let response_key = match data.AdminRequest {
+        Some(b) if b => "CipherMiniResponse",
+        _ => "CipherResponse",
+    };
 
     Ok(Json(json!({ // AttachmentUploadDataResponseModel
         "Object": "attachment-fileUpload",
         "AttachmentId": attachment_id,
         "Url": url,
         "FileUploadType": FileUploadType::Direct as i32,
-        "CipherResponse": cipher.to_json(&headers.host, &headers.user.uuid, &conn),
-        "CipherMiniResponse": null,
+        response_key: cipher.to_json(&headers.host, &headers.user.uuid, &conn),
     })))
 }
 
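The `post_attachment_v2` change selects the JSON response key at runtime: org-vault ("admin") requests get `CipherMiniResponse`, everything else gets `CipherResponse`. The `match data.AdminRequest { Some(b) if b => ..., _ => ... }` pattern is just a verbose way of testing for `Some(true)`. A self-contained sketch of the same idea, with a made-up payload:

// Self-contained sketch of choosing a JSON response key from an
// Option<bool> flag, mirroring the AdminRequest handling above.
// serde_json's json! macro accepts non-literal keys, as the diff relies on.
use serde_json::json;

fn response_key(admin_request: Option<bool>) -> &'static str {
    match admin_request {
        Some(true) => "CipherMiniResponse", // attaching from an org vault view
        _ => "CipherResponse",              // personal vault, or flag absent
    }
}

fn main() {
    let key = response_key(Some(true));
    let body = json!({
        "Object": "attachment-fileUpload",
        key: json!({ "Id": "example-cipher-uuid" }), // hypothetical payload
    });
    println!("{}", body);
}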
@@ -1,24 +1,804 @@
+use chrono::{Duration, Utc};
 use rocket::Route;
 use rocket_contrib::json::Json;
+use serde_json::Value;
+use std::borrow::Borrow;
 
-use crate::{api::JsonResult, auth::Headers, db::DbConn};
+use crate::{
+    api::{EmptyResult, JsonResult, JsonUpcase, NumberOrString},
+    auth::{decode_emergency_access_invite, Headers},
+    db::{models::*, DbConn, DbPool},
+    mail, CONFIG,
+};
 
 pub fn routes() -> Vec<Route> {
-    routes![get_contacts,]
+    routes![
+        get_contacts,
+        get_grantees,
+        get_emergency_access,
+        put_emergency_access,
+        delete_emergency_access,
+        post_delete_emergency_access,
+        send_invite,
+        resend_invite,
+        accept_invite,
+        confirm_emergency_access,
+        initiate_emergency_access,
+        approve_emergency_access,
+        reject_emergency_access,
+        takeover_emergency_access,
+        password_emergency_access,
+        view_emergency_access,
+        policies_emergency_access,
+    ]
 }
 
-/// This endpoint is expected to return at least something.
-/// If we return an error message that will trigger error toasts for the user.
-/// To prevent this we just return an empty json result with no Data.
-/// When this feature is going to be implemented it also needs to return this empty Data
-/// instead of throwing an error/4XX unless it really is an error.
+// region get
+
 #[get("/emergency-access/trusted")]
-fn get_contacts(_headers: Headers, _conn: DbConn) -> JsonResult {
-    debug!("Emergency access is not supported.");
+fn get_contacts(headers: Headers, conn: DbConn) -> JsonResult {
+    check_emergency_access_allowed()?;
+
+    let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &conn);
+
+    let emergency_access_list_json: Vec<Value> =
+        emergency_access_list.iter().map(|e| e.to_json_grantee_details(&conn)).collect();
 
     Ok(Json(json!({
-        "Data": [],
+        "Data": emergency_access_list_json,
         "Object": "list",
         "ContinuationToken": null
     })))
 }
 
+#[get("/emergency-access/granted")]
+fn get_grantees(headers: Headers, conn: DbConn) -> JsonResult {
+    check_emergency_access_allowed()?;
+
+    let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &conn);
+
+    let emergency_access_list_json: Vec<Value> =
+        emergency_access_list.iter().map(|e| e.to_json_grantor_details(&conn)).collect();
+
+    Ok(Json(json!({
+        "Data": emergency_access_list_json,
+        "Object": "list",
+        "ContinuationToken": null
+    })))
+}
+
+#[get("/emergency-access/<emer_id>")]
+fn get_emergency_access(emer_id: String, conn: DbConn) -> JsonResult {
+    check_emergency_access_allowed()?;
+
+    match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
+        Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&conn))),
+        None => err!("Emergency access not valid."),
+    }
+}
+
+// endregion
+
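Every handler in this new module opens with `check_emergency_access_allowed()?` and then walks an `EmergencyAccessStatus` value through a fixed progression (Invited, Accepted, Confirmed, RecoveryInitiated, RecoveryApproved). Neither the enum nor the check function appears in this excerpt; the following sketch of both is an assumption inferred from how they are used here, with the gate presumably tied to the global emergency-access config setting.

// Assumed sketches of EmergencyAccessStatus and the config gate used by
// every route in this module; the real definitions live elsewhere in the
// crate (db models and config) and may differ in detail.
#[derive(Clone, Copy, PartialEq)]
pub enum EmergencyAccessStatus {
    Invited = 0,
    Accepted = 1,
    Confirmed = 2,
    RecoveryInitiated = 3,
    RecoveryApproved = 4,
}

// Hypothetical Result type standing in for vaultwarden's EmptyResult/err!.
pub fn check_emergency_access_allowed() -> Result<(), String> {
    // Assumption: gated on a global "emergency access allowed" setting.
    let emergency_access_allowed = true; // stand-in for CONFIG.emergency_access_allowed()
    if !emergency_access_allowed {
        return Err("Emergency access is not allowed.".into());
    }
    Ok(())
}

fn main() {
    assert!(check_emergency_access_allowed().is_ok());
    let status = EmergencyAccessStatus::Confirmed;
    assert!(status as i32 == 2);
}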
+// region put/post
+
+#[derive(Deserialize, Debug)]
+#[allow(non_snake_case)]
+struct EmergencyAccessUpdateData {
+    Type: NumberOrString,
+    WaitTimeDays: i32,
+    KeyEncrypted: Option<String>,
+}
+
+#[put("/emergency-access/<emer_id>", data = "<data>")]
+fn put_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
+    post_emergency_access(emer_id, data, conn)
+}
+
+#[post("/emergency-access/<emer_id>", data = "<data>")]
+fn post_emergency_access(emer_id: String, data: JsonUpcase<EmergencyAccessUpdateData>, conn: DbConn) -> JsonResult {
+    check_emergency_access_allowed()?;
+
+    let data: EmergencyAccessUpdateData = data.into_inner().data;
+
+    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
+        Some(emergency_access) => emergency_access,
+        None => err!("Emergency access not valid."),
+    };
+
+    let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
+        Some(new_type) => new_type as i32,
+        None => err!("Invalid emergency access type."),
+    };
+
+    emergency_access.atype = new_type;
+    emergency_access.wait_time_days = data.WaitTimeDays;
+    emergency_access.key_encrypted = data.KeyEncrypted;
+
+    emergency_access.save(&conn)?;
+    Ok(Json(emergency_access.to_json()))
+}
+
+// endregion
+
+// region delete
+
+#[delete("/emergency-access/<emer_id>")]
+fn delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
+    check_emergency_access_allowed()?;
+
+    let grantor_user = headers.user;
+
+    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
+        Some(emer) => {
+            if emer.grantor_uuid != grantor_user.uuid && emer.grantee_uuid != Some(grantor_user.uuid) {
+                err!("Emergency access not valid.")
+            }
+            emer
+        }
+        None => err!("Emergency access not valid."),
+    };
+    emergency_access.delete(&conn)?;
+    Ok(())
+}
+
+#[post("/emergency-access/<emer_id>/delete")]
+fn post_delete_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
+    delete_emergency_access(emer_id, headers, conn)
+}
+
+// endregion
+
+// region invite
+
+#[derive(Deserialize, Debug)]
+#[allow(non_snake_case)]
+struct EmergencyAccessInviteData {
+    Email: String,
+    Type: NumberOrString,
+    WaitTimeDays: i32,
+}
+
+#[post("/emergency-access/invite", data = "<data>")]
+fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, conn: DbConn) -> EmptyResult {
+    check_emergency_access_allowed()?;
+
+    let data: EmergencyAccessInviteData = data.into_inner().data;
+    let email = data.Email.to_lowercase();
+    let wait_time_days = data.WaitTimeDays;
+
+    let emergency_access_status = EmergencyAccessStatus::Invited as i32;
+
+    let new_type = match EmergencyAccessType::from_str(&data.Type.into_string()) {
+        Some(new_type) => new_type as i32,
+        None => err!("Invalid emergency access type."),
+    };
+
+    let grantor_user = headers.user;
+
+    // avoid setting yourself as emergency contact
+    if email == grantor_user.email {
+        err!("You can not set yourself as an emergency contact.")
+    }
+
+    let grantee_user = match User::find_by_mail(&email, &conn) {
+        None => {
+            if !CONFIG.signups_allowed() {
+                err!(format!("Grantee user does not exist: {}", email))
+            }
+
+            if !CONFIG.is_email_domain_allowed(&email) {
+                err!("Email domain not eligible for invitations")
+            }
+
+            if !CONFIG.mail_enabled() {
+                let invitation = Invitation::new(email.clone());
+                invitation.save(&conn)?;
+            }
+
+            let mut user = User::new(email.clone());
+            user.save(&conn)?;
+            user
+        }
+        Some(user) => user,
+    };
+
+    if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
+        &grantor_user.uuid,
+        &grantee_user.uuid,
+        &grantee_user.email,
+        &conn,
+    )
+    .is_some()
+    {
+        err!(format!("Grantee user already invited: {}", email))
+    }
+
+    let mut new_emergency_access = EmergencyAccess::new(
+        grantor_user.uuid.clone(),
+        Some(grantee_user.email.clone()),
+        emergency_access_status,
+        new_type,
+        wait_time_days,
+    );
+    new_emergency_access.save(&conn)?;
+
+    if CONFIG.mail_enabled() {
+        mail::send_emergency_access_invite(
+            &grantee_user.email,
+            &grantee_user.uuid,
+            Some(new_emergency_access.uuid),
+            Some(grantor_user.name.clone()),
+            Some(grantor_user.email),
+        )?;
+    } else {
+        // Automatically mark user as accepted if no email invites
+        match User::find_by_mail(&email, &conn) {
+            Some(user) => {
+                match accept_invite_process(user.uuid, new_emergency_access.uuid, Some(email), conn.borrow()) {
+                    Ok(v) => (v),
+                    Err(e) => err!(e.to_string()),
+                }
+            }
+            None => err!("Grantee user not found."),
+        }
+    }
+
+    Ok(())
+}
+
+#[post("/emergency-access/<emer_id>/reinvite")]
+fn resend_invite(emer_id: String, headers: Headers, conn: DbConn) -> EmptyResult {
+    check_emergency_access_allowed()?;
+
+    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
+        Some(emer) => emer,
+        None => err!("Emergency access not valid."),
+    };
+
+    if emergency_access.grantor_uuid != headers.user.uuid {
+        err!("Emergency access not valid.");
+    }
+
+    if emergency_access.status != EmergencyAccessStatus::Invited as i32 {
+        err!("The grantee user is already accepted or confirmed to the organization");
+    }
+
+    let email = match emergency_access.email.clone() {
+        Some(email) => email,
+        None => err!("Email not valid."),
+    };
+
+    let grantee_user = match User::find_by_mail(&email, &conn) {
+        Some(user) => user,
+        None => err!("Grantee user not found."),
+    };
+
+    let grantor_user = headers.user;
+
+    if CONFIG.mail_enabled() {
+        mail::send_emergency_access_invite(
+            &email,
+            &grantor_user.uuid,
+            Some(emergency_access.uuid),
+            Some(grantor_user.name.clone()),
+            Some(grantor_user.email),
+        )?;
+    } else {
+        if Invitation::find_by_mail(&email, &conn).is_none() {
+            let invitation = Invitation::new(email);
+            invitation.save(&conn)?;
+        }
+
+        // Automatically mark user as accepted if no email invites
+        match accept_invite_process(grantee_user.uuid, emergency_access.uuid, emergency_access.email, conn.borrow()) {
+            Ok(v) => (v),
+            Err(e) => err!(e.to_string()),
+        }
+    }
+
+    Ok(())
+}
+
+#[derive(Deserialize)]
+#[allow(non_snake_case)]
+struct AcceptData {
+    Token: String,
+}
+
+#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
+fn accept_invite(emer_id: String, data: JsonUpcase<AcceptData>, conn: DbConn) -> EmptyResult {
+    check_emergency_access_allowed()?;
+
+    let data: AcceptData = data.into_inner().data;
+    let token = &data.Token;
+    let claims = decode_emergency_access_invite(token)?;
+
+    let grantee_user = match User::find_by_mail(&claims.email, &conn) {
+        Some(user) => {
+            Invitation::take(&claims.email, &conn);
+            user
+        }
+        None => err!("Invited user not found"),
+    };
+
+    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
+        Some(emer) => emer,
+        None => err!("Emergency access not valid."),
+    };
+
+    // get grantor user to send Accepted email
+    let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
+        Some(user) => user,
+        None => err!("Grantor user not found."),
+    };
+
+    if (claims.emer_id.is_some() && emer_id == claims.emer_id.unwrap())
+        && (claims.grantor_name.is_some() && grantor_user.name == claims.grantor_name.unwrap())
+        && (claims.grantor_email.is_some() && grantor_user.email == claims.grantor_email.unwrap())
+    {
+        match accept_invite_process(grantee_user.uuid.clone(), emer_id, Some(grantee_user.email.clone()), &conn) {
+            Ok(v) => (v),
+            Err(e) => err!(e.to_string()),
+        }
+
+        if CONFIG.mail_enabled() {
+            mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email)?;
+        }
+
+        Ok(())
+    } else {
+        err!("Emergency access invitation error.")
+    }
+}
+
+fn accept_invite_process(grantee_uuid: String, emer_id: String, email: Option<String>, conn: &DbConn) -> EmptyResult {
+    let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, conn) {
+        Some(emer) => emer,
+        None => err!("Emergency access not valid."),
+    };
+
+    let emer_email = emergency_access.email;
+    if emer_email.is_none() || emer_email != email {
+        err!("User email does not match invite.");
+    }
+
+    if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
+        err!("Emergency contact already accepted.");
+    }
+
+    emergency_access.status = EmergencyAccessStatus::Accepted as i32;
+    emergency_access.grantee_uuid = Some(grantee_uuid);
+    emergency_access.email = None;
+    emergency_access.save(conn)
+}
+
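`accept_invite` only trusts the JWT after checking that the `emer_id`, grantor name, and grantor email embedded in the claims all match the record being accepted. Below is a hedged sketch of the claims struct that `decode_emergency_access_invite` presumably returns; the real definition lives in `src/auth.rs` and may carry additional standard JWT fields.

// Assumed shape of the emergency-access invite claims, inferred from the
// fields accept_invite() reads; the real struct lives in src/auth.rs.
use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
pub struct EmergencyAccessInviteJwtClaims {
    pub email: String,
    pub emer_id: Option<String>,
    pub grantor_name: Option<String>,
    pub grantor_email: Option<String>,
}

// Mirrors the triple check in accept_invite(): all three optional claims
// must be present and must match the stored record.
pub fn claims_match(
    claims: &EmergencyAccessInviteJwtClaims,
    emer_id: &str,
    grantor_name: &str,
    grantor_email: &str,
) -> bool {
    claims.emer_id.as_deref() == Some(emer_id)
        && claims.grantor_name.as_deref() == Some(grantor_name)
        && claims.grantor_email.as_deref() == Some(grantor_email)
}

fn main() {
    let claims = EmergencyAccessInviteJwtClaims {
        email: "grantee@example.com".into(),
        emer_id: Some("emer-uuid".into()),
        grantor_name: Some("Alice".into()),
        grantor_email: Some("alice@example.com".into()),
    };
    assert!(claims_match(&claims, "emer-uuid", "Alice", "alice@example.com"));
}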
|
#[derive(Deserialize)]
|
||||||
|
#[allow(non_snake_case)]
|
||||||
|
struct ConfirmData {
|
||||||
|
Key: String,
|
||||||
|
}
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/confirm", data = "<data>")]
|
||||||
|
fn confirm_emergency_access(
|
||||||
|
emer_id: String,
|
||||||
|
data: JsonUpcase<ConfirmData>,
|
||||||
|
headers: Headers,
|
||||||
|
conn: DbConn,
|
||||||
|
) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let confirming_user = headers.user;
|
||||||
|
let data: ConfirmData = data.into_inner().data;
|
||||||
|
let key = data.Key;
|
||||||
|
|
||||||
|
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if emergency_access.status != EmergencyAccessStatus::Accepted as i32
|
||||||
|
|| emergency_access.grantor_uuid != confirming_user.uuid
|
||||||
|
{
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&confirming_user.uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
|
||||||
|
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantee user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
|
||||||
|
emergency_access.key_encrypted = Some(key);
|
||||||
|
emergency_access.email = None;
|
||||||
|
|
||||||
|
emergency_access.save(&conn)?;
|
||||||
|
|
||||||
|
if CONFIG.mail_enabled() {
|
||||||
|
mail::send_emergency_access_invite_confirmed(&grantee_user.email, &grantor_user.name)?;
|
||||||
|
}
|
||||||
|
Ok(Json(emergency_access.to_json()))
|
||||||
|
} else {
|
||||||
|
err!("Grantee user not found.")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// endregion
|
||||||
|
|
||||||
|
// region access emergency access
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/initiate")]
|
||||||
|
fn initiate_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let initiating_user = headers.user;
|
||||||
|
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if emergency_access.status != EmergencyAccessStatus::Confirmed as i32
|
||||||
|
|| emergency_access.grantee_uuid != Some(initiating_user.uuid.clone())
|
||||||
|
{
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
let now = Utc::now().naive_utc();
|
||||||
|
emergency_access.status = EmergencyAccessStatus::RecoveryInitiated as i32;
|
||||||
|
emergency_access.updated_at = now;
|
||||||
|
emergency_access.recovery_initiated_at = Some(now);
|
||||||
|
emergency_access.last_notification_at = Some(now);
|
||||||
|
emergency_access.save(&conn)?;
|
||||||
|
|
||||||
|
if CONFIG.mail_enabled() {
|
||||||
|
mail::send_emergency_access_recovery_initiated(
|
||||||
|
&grantor_user.email,
|
||||||
|
&initiating_user.name,
|
||||||
|
emergency_access.get_type_as_str(),
|
||||||
|
&emergency_access.wait_time_days.clone().to_string(),
|
||||||
|
)?;
|
||||||
|
}
|
||||||
|
Ok(Json(emergency_access.to_json()))
|
||||||
|
}
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/approve")]
|
||||||
|
fn approve_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let approving_user = headers.user;
|
||||||
|
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|
||||||
|
|| emergency_access.grantor_uuid != approving_user.uuid
|
||||||
|
{
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&approving_user.uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
|
||||||
|
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantee user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
emergency_access.status = EmergencyAccessStatus::RecoveryApproved as i32;
|
||||||
|
emergency_access.save(&conn)?;
|
||||||
|
|
||||||
|
if CONFIG.mail_enabled() {
|
||||||
|
mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name)?;
|
||||||
|
}
|
||||||
|
Ok(Json(emergency_access.to_json()))
|
||||||
|
} else {
|
||||||
|
err!("Grantee user not found.")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/reject")]
|
||||||
|
fn reject_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let rejecting_user = headers.user;
|
||||||
|
let mut emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if (emergency_access.status != EmergencyAccessStatus::RecoveryInitiated as i32
|
||||||
|
&& emergency_access.status != EmergencyAccessStatus::RecoveryApproved as i32)
|
||||||
|
|| emergency_access.grantor_uuid != rejecting_user.uuid
|
||||||
|
{
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&rejecting_user.uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if let Some(grantee_uuid) = emergency_access.grantee_uuid.as_ref() {
|
||||||
|
let grantee_user = match User::find_by_uuid(grantee_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantee user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
emergency_access.status = EmergencyAccessStatus::Confirmed as i32;
|
||||||
|
emergency_access.save(&conn)?;
|
||||||
|
|
||||||
|
if CONFIG.mail_enabled() {
|
||||||
|
mail::send_emergency_access_recovery_rejected(&grantee_user.email, &grantor_user.name)?;
|
||||||
|
}
|
||||||
|
Ok(Json(emergency_access.to_json()))
|
||||||
|
} else {
|
||||||
|
err!("Grantee user not found.")
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// endregion
|
||||||
|
|
||||||
|
// region action
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/view")]
|
||||||
|
fn view_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let requesting_user = headers.user;
|
||||||
|
let host = headers.host;
|
||||||
|
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::View) {
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let ciphers = Cipher::find_owned_by_user(&emergency_access.grantor_uuid, &conn);
|
||||||
|
|
||||||
|
let ciphers_json: Vec<Value> =
|
||||||
|
ciphers.iter().map(|c| c.to_json(&host, &emergency_access.grantor_uuid, &conn)).collect();
|
||||||
|
|
||||||
|
Ok(Json(json!({
|
||||||
|
"Ciphers": ciphers_json,
|
||||||
|
"KeyEncrypted": &emergency_access.key_encrypted,
|
||||||
|
"Object": "emergencyAccessView",
|
||||||
|
})))
|
||||||
|
}
|
||||||
|
|
||||||
|
#[post("/emergency-access/<emer_id>/takeover")]
|
||||||
|
fn takeover_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
check_emergency_access_allowed()?;
|
||||||
|
|
||||||
|
let requesting_user = headers.user;
|
||||||
|
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
Ok(Json(json!({
|
||||||
|
"Kdf": grantor_user.client_kdf_type,
|
||||||
|
"KdfIterations": grantor_user.client_kdf_iter,
|
||||||
|
"KeyEncrypted": &emergency_access.key_encrypted,
|
||||||
|
"Object": "emergencyAccessTakeover",
|
||||||
|
})))
|
||||||
|
}
|
||||||
|
|
||||||
|
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EmergencyAccessPasswordData {
    NewMasterPasswordHash: String,
    Key: String,
}

#[post("/emergency-access/<emer_id>/password", data = "<data>")]
fn password_emergency_access(
    emer_id: String,
    data: JsonUpcase<EmergencyAccessPasswordData>,
    headers: Headers,
    conn: DbConn,
) -> EmptyResult {
    check_emergency_access_allowed()?;

    let data: EmergencyAccessPasswordData = data.into_inner().data;
    let new_master_password_hash = &data.NewMasterPasswordHash;
    let key = data.Key;

    let requesting_user = headers.user;
    let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
        Some(emer) => emer,
        None => err!("Emergency access not valid."),
    };

    if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
        err!("Emergency access not valid.")
    }

    let mut grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
        Some(user) => user,
        None => err!("Grantor user not found."),
    };

    // Change the grantor's password
    grantor_user.set_password(new_master_password_hash, None);
    grantor_user.akey = key;
    grantor_user.save(&conn)?;

    // Disable TwoFactor providers since they will otherwise block logins
    TwoFactor::delete_all_by_user(&grantor_user.uuid, &conn)?;

    // Fetch the grantor's organization memberships
    let user_org_grantor = UserOrganization::find_any_state_by_user(&grantor_user.uuid, &conn);

    // Remove the grantor from all organizations unless they are an Owner
    for user_org in user_org_grantor {
        if user_org.atype != UserOrgType::Owner as i32 {
            user_org.delete(&conn)?;
        }
    }
    Ok(())
}

// endregion

#[get("/emergency-access/<emer_id>/policies")]
|
||||||
|
fn policies_emergency_access(emer_id: String, headers: Headers, conn: DbConn) -> JsonResult {
|
||||||
|
let requesting_user = headers.user;
|
||||||
|
let emergency_access = match EmergencyAccess::find_by_uuid(&emer_id, &conn) {
|
||||||
|
Some(emer) => emer,
|
||||||
|
None => err!("Emergency access not valid."),
|
||||||
|
};
|
||||||
|
|
||||||
|
if !is_valid_request(&emergency_access, requesting_user.uuid, EmergencyAccessType::Takeover) {
|
||||||
|
err!("Emergency access not valid.")
|
||||||
|
}
|
||||||
|
|
||||||
|
let grantor_user = match User::find_by_uuid(&emergency_access.grantor_uuid, &conn) {
|
||||||
|
Some(user) => user,
|
||||||
|
None => err!("Grantor user not found."),
|
||||||
|
};
|
||||||
|
|
||||||
|
let policies = OrgPolicy::find_confirmed_by_user(&grantor_user.uuid, &conn);
|
||||||
|
let policies_json: Vec<Value> = policies.iter().map(OrgPolicy::to_json).collect();
|
||||||
|
|
||||||
|
Ok(Json(json!({
|
||||||
|
"Data": policies_json,
|
||||||
|
"Object": "list",
|
||||||
|
"ContinuationToken": null
|
||||||
|
})))
|
||||||
|
}
|
||||||
|
|
||||||
|
fn is_valid_request(
    emergency_access: &EmergencyAccess,
    requesting_user_uuid: String,
    requested_access_type: EmergencyAccessType,
) -> bool {
    emergency_access.grantee_uuid == Some(requesting_user_uuid)
        && emergency_access.status == EmergencyAccessStatus::RecoveryApproved as i32
        && emergency_access.atype == requested_access_type as i32
}

fn check_emergency_access_allowed() -> EmptyResult {
    if !CONFIG.emergency_access_allowed() {
        err!("Emergency access is not allowed.")
    }
    Ok(())
}

pub fn emergency_request_timeout_job(pool: DbPool) {
    debug!("Start emergency_request_timeout_job");
    if !CONFIG.emergency_access_allowed() {
        return;
    }

    if let Ok(conn) = pool.get() {
        let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);

        if emergency_access_list.is_empty() {
            debug!("No emergency request timeout to approve");
        }

        for mut emer in emergency_access_list {
            if emer.recovery_initiated_at.is_some()
                && Utc::now().naive_utc()
                    >= emer.recovery_initiated_at.unwrap() + Duration::days(emer.wait_time_days as i64)
            {
                emer.status = EmergencyAccessStatus::RecoveryApproved as i32;
                emer.save(&conn).expect("Cannot save emergency access on job");

                if CONFIG.mail_enabled() {
                    // Get the grantor user, who is notified that the request timed out
                    let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");

                    // Get the grantee user, who is notified that the request was approved
                    let grantee_user =
                        User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
                            .expect("Grantee user not found.");

                    mail::send_emergency_access_recovery_timed_out(
                        &grantor_user.email,
                        &grantee_user.name.clone(),
                        emer.get_type_as_str(),
                    )
                    .expect("Error on sending email");

                    mail::send_emergency_access_recovery_approved(&grantee_user.email, &grantor_user.name.clone())
                        .expect("Error on sending email");
                }
            }
        }
    } else {
        error!("Failed to get DB connection while searching for timed-out emergency requests")
    }
}
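
As a quick illustration of the timeout condition checked above, here is a minimal, self-contained sketch (a hypothetical helper, not part of this patch; only the chrono crate assumed): recovery is auto-approved once `wait_time_days` have passed since `recovery_initiated_at`.

use chrono::{Duration, NaiveDateTime, Utc};

// Hypothetical helper mirroring the job's check: true once the grantor's
// wait period has fully elapsed since recovery was initiated.
fn wait_period_elapsed(recovery_initiated_at: NaiveDateTime, wait_time_days: i64) -> bool {
    Utc::now().naive_utc() >= recovery_initiated_at + Duration::days(wait_time_days)
}
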
pub fn emergency_notification_reminder_job(pool: DbPool) {
    debug!("Start emergency_notification_reminder_job");
    if !CONFIG.emergency_access_allowed() {
        return;
    }

    if let Ok(conn) = pool.get() {
        let emergency_access_list = EmergencyAccess::find_all_recoveries(&conn);

        if emergency_access_list.is_empty() {
            debug!("No emergency request reminder notification to send");
        }

        for mut emer in emergency_access_list {
            if (emer.recovery_initiated_at.is_some()
                && Utc::now().naive_utc()
                    >= emer.recovery_initiated_at.unwrap() + Duration::days((emer.wait_time_days as i64) - 1))
                && (emer.last_notification_at.is_none()
                    || (emer.last_notification_at.is_some()
                        && Utc::now().naive_utc() >= emer.last_notification_at.unwrap() + Duration::days(1)))
            {
                emer.save(&conn).expect("Cannot save emergency access on job");

                if CONFIG.mail_enabled() {
                    // Get the grantor user, who receives the reminder
                    let grantor_user = User::find_by_uuid(&emer.grantor_uuid, &conn).expect("Grantor user not found.");

                    // Get the grantee user, whose name is included in the reminder
                    let grantee_user =
                        User::find_by_uuid(&emer.grantee_uuid.clone().expect("Grantee user invalid."), &conn)
                            .expect("Grantee user not found.");

                    mail::send_emergency_access_recovery_reminder(
                        &grantor_user.email,
                        &grantee_user.name.clone(),
                        emer.get_type_as_str(),
                        &emer.wait_time_days.to_string(), // TODO(jjlin): This should be the number of days left.
                    )
                    .expect("Error on sending email");
                }
            }
        }
    } else {
        error!("Failed to get DB connection while searching for emergency notification reminders")
    }
}
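
Regarding the TODO(jjlin) above, the number of days left would presumably be the auto-approval deadline minus the current time; a hedged sketch (hypothetical `days_left` helper, chrono only):

use chrono::{Duration, NaiveDateTime, Utc};

// Hypothetical resolution of the TODO: days remaining until auto-approval,
// clamped at zero once the deadline has passed.
fn days_left(recovery_initiated_at: NaiveDateTime, wait_time_days: i64) -> i64 {
    let deadline = recovery_initiated_at + Duration::days(wait_time_days);
    (deadline - Utc::now().naive_utc()).num_days().max(0)
}
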
@@ -7,7 +7,9 @@ mod sends;
 pub mod two_factor;
 
 pub use ciphers::purge_trashed_ciphers;
+pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
 pub use sends::purge_sends;
+pub use two_factor::send_incomplete_2fa_notifications;
 
 pub fn routes() -> Vec<Route> {
     let mut mod_routes =
@@ -35,12 +35,15 @@ pub fn routes() {
         get_org_users,
         send_invite,
         reinvite_user,
+        bulk_reinvite_user,
         confirm_invite,
+        bulk_confirm_invite,
         accept_invite,
         get_user,
         edit_user,
         put_organization_user,
         delete_user,
+        bulk_delete_user,
         post_delete_user,
         post_org_import,
         list_policies,
@@ -52,6 +55,7 @@ pub fn routes() {
         get_plans_tax_rates,
         import,
         post_org_keys,
+        bulk_public_keys,
     ]
}
 
@@ -87,11 +91,22 @@ struct OrgKeyData {
     PublicKey: String,
 }
 
+#[derive(Deserialize, Debug)]
+#[allow(non_snake_case)]
+struct OrgBulkIds {
+    Ids: Vec<String>,
+}
+
 #[post("/organizations", data = "<data>")]
 fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, conn: DbConn) -> JsonResult {
     if !CONFIG.is_org_creation_allowed(&headers.user.email) {
         err!("User not allowed to create organizations")
     }
+    if OrgPolicy::is_applicable_to_user(&headers.user.uuid, OrgPolicyType::SingleOrg, &conn) {
+        err!(
+            "You may not create an organization. You belong to an organization which has a policy that prohibits you from being a member of any other organization."
+        )
+    }
 
     let data: OrgData = data.into_inner().data;
     let (private_key, public_key) = if data.Keys.is_some() {
@@ -367,7 +382,7 @@ fn delete_organization_collection(
 }
 
 #[derive(Deserialize, Debug)]
-#[allow(non_snake_case)]
+#[allow(non_snake_case, dead_code)]
 struct DeleteCollectionData {
     Id: String,
     OrgId: String,
@@ -540,18 +555,19 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
     }
 
     for email in data.Emails.iter() {
+        let email = email.to_lowercase();
         let mut user_org_status = if CONFIG.mail_enabled() {
             UserOrgStatus::Invited as i32
         } else {
             UserOrgStatus::Accepted as i32 // Automatically mark user as accepted if no email invites
         };
-        let user = match User::find_by_mail(email, &conn) {
+        let user = match User::find_by_mail(&email, &conn) {
             None => {
                 if !CONFIG.invitations_allowed() {
                     err!(format!("User does not exist: {}", email))
                 }
 
-                if !CONFIG.is_email_domain_allowed(email) {
+                if !CONFIG.is_email_domain_allowed(&email) {
                     err!("Email domain not eligible for invitations")
                 }
 
@@ -601,7 +617,7 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
         };
 
         mail::send_invite(
-            email,
+            &email,
             &user.uuid,
             Some(org_id.clone()),
             Some(new_user.uuid),
@@ -614,8 +630,44 @@ fn send_invite(org_id: String, data: JsonUpcase<InviteData>, headers: AdminHeade
     Ok(())
 }
 
+#[post("/organizations/<org_id>/users/reinvite", data = "<data>")]
+fn bulk_reinvite_user(
+    org_id: String,
+    data: JsonUpcase<OrgBulkIds>,
+    headers: AdminHeaders,
+    conn: DbConn,
+) -> Json<Value> {
+    let data: OrgBulkIds = data.into_inner().data;
+
+    let mut bulk_response = Vec::new();
+    for org_user_id in data.Ids {
+        let err_msg = match _reinvite_user(&org_id, &org_user_id, &headers.user.email, &conn) {
+            Ok(_) => String::from(""),
+            Err(e) => format!("{:?}", e),
+        };
+
+        bulk_response.push(json!(
+            {
+                "Object": "OrganizationBulkConfirmResponseModel",
+                "Id": org_user_id,
+                "Error": err_msg
+            }
+        ))
+    }
+
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+
 #[post("/organizations/<org_id>/users/<user_org>/reinvite")]
 fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
+    _reinvite_user(&org_id, &user_org, &headers.user.email, &conn)
+}
+
+fn _reinvite_user(org_id: &str, user_org: &str, invited_by_email: &str, conn: &DbConn) -> EmptyResult {
     if !CONFIG.invitations_allowed() {
         err!("Invitations are not allowed.")
     }
@@ -624,7 +676,7 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         err!("SMTP is not configured.")
     }
 
-    let user_org = match UserOrganization::find_by_uuid(&user_org, &conn) {
+    let user_org = match UserOrganization::find_by_uuid(user_org, conn) {
         Some(user_org) => user_org,
         None => err!("The user hasn't been invited to the organization."),
     };
@@ -633,12 +685,12 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         err!("The user is already accepted or confirmed to the organization")
     }
 
-    let user = match User::find_by_uuid(&user_org.user_uuid, &conn) {
+    let user = match User::find_by_uuid(&user_org.user_uuid, conn) {
         Some(user) => user,
         None => err!("User not found."),
     };
 
-    let org_name = match Organization::find_by_uuid(&org_id, &conn) {
+    let org_name = match Organization::find_by_uuid(org_id, conn) {
         Some(org) => org.name,
         None => err!("Error looking up organization."),
     };
@@ -647,14 +699,14 @@ fn reinvite_user(org_id: String, user_org: String, headers: AdminHeaders, conn:
         mail::send_invite(
             &user.email,
             &user.uuid,
-            Some(org_id),
+            Some(org_id.to_string()),
             Some(user_org.uuid),
             &org_name,
-            Some(headers.user.email),
+            Some(invited_by_email.to_string()),
         )?;
     } else {
         let invitation = Invitation::new(user.email);
-        invitation.save(&conn)?;
+        invitation.save(conn)?;
     }
 
     Ok(())
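
The bulk endpoints added in this change (bulk_reinvite_user, bulk_confirm_invite, bulk_delete_user) all share one response convention: a "list" object with one model per requested id, where an empty "Error" string marks success. A self-contained sketch of that shape (hypothetical `bulk_entry` helper; only serde_json assumed):

use serde_json::{json, Value};

// Hypothetical illustration of the bulk-response convention: one model per
// requested id, carrying the formatted error message (or "" on success).
fn bulk_entry(org_user_id: &str, result: Result<(), String>) -> Value {
    json!({
        "Object": "OrganizationBulkConfirmResponseModel",
        "Id": org_user_id,
        "Error": result.err().unwrap_or_default(),
    })
}
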
@@ -700,6 +752,30 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
             err!("You cannot join this organization until you enable two-step login on your user account.")
         }
 
+        // Enforce the Single Organization policy of the organization the user is trying to join
+        let single_org_policy_enabled =
+            match OrgPolicy::find_by_org_and_type(&user_org.org_uuid, OrgPolicyType::SingleOrg as i32, &conn) {
+                Some(p) => p.enabled,
+                None => false,
+            };
+        if single_org_policy_enabled && user_org.atype < UserOrgType::Admin {
+            let is_member_of_another_org = UserOrganization::find_any_state_by_user(&user_org.user_uuid, &conn)
+                .into_iter()
+                .filter(|uo| uo.org_uuid != user_org.org_uuid)
+                .count()
+                > 1;
+            if is_member_of_another_org {
+                err!("You may not join this organization until you leave or remove all other organizations.")
+            }
+        }
+
+        // Enforce the Single Organization policy of the other organizations the user is a member of
+        if OrgPolicy::is_applicable_to_user(&user_org.user_uuid, OrgPolicyType::SingleOrg, &conn) {
+            err!(
+                "You cannot join this organization because you are a member of an organization which forbids it"
+            )
+        }
+
         user_org.status = UserOrgStatus::Accepted as i32;
         user_org.save(&conn)?;
     }
@@ -727,6 +803,40 @@ fn accept_invite(_org_id: String, _org_user_id: String, data: JsonUpcase<AcceptD
     Ok(())
 }
 
+#[post("/organizations/<org_id>/users/confirm", data = "<data>")]
+fn bulk_confirm_invite(org_id: String, data: JsonUpcase<Value>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
+    let data = data.into_inner().data;
+
+    let mut bulk_response = Vec::new();
+    match data["Keys"].as_array() {
+        Some(keys) => {
+            for invite in keys {
+                let org_user_id = invite["Id"].as_str().unwrap_or_default();
+                let user_key = invite["Key"].as_str().unwrap_or_default();
+                let err_msg = match _confirm_invite(&org_id, org_user_id, user_key, &headers, &conn) {
+                    Ok(_) => String::from(""),
+                    Err(e) => format!("{:?}", e),
+                };
+
+                bulk_response.push(json!(
+                    {
+                        "Object": "OrganizationBulkConfirmResponseModel",
+                        "Id": org_user_id,
+                        "Error": err_msg
+                    }
+                ));
+            }
+        }
+        None => error!("No keys to confirm"),
+    }
+
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+
 #[post("/organizations/<org_id>/users/<org_user_id>/confirm", data = "<data>")]
 fn confirm_invite(
     org_id: String,
@@ -736,8 +846,16 @@ fn confirm_invite(
     conn: DbConn,
 ) -> EmptyResult {
     let data = data.into_inner().data;
+    let user_key = data["Key"].as_str().unwrap_or_default();
+    _confirm_invite(&org_id, &org_user_id, user_key, &headers, &conn)
+}
 
-    let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
+fn _confirm_invite(org_id: &str, org_user_id: &str, key: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
+    if key.is_empty() || org_user_id.is_empty() {
+        err!("Key or UserId is not set, unable to process request");
+    }
+
+    let mut user_to_confirm = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
         Some(user) => user,
         None => err!("The specified user isn't a member of the organization"),
     };
@@ -751,24 +869,21 @@ fn confirm_invite(
     }
 
     user_to_confirm.status = UserOrgStatus::Confirmed as i32;
-    user_to_confirm.akey = match data["Key"].as_str() {
-        Some(key) => key.to_string(),
-        None => err!("Invalid key provided"),
-    };
+    user_to_confirm.akey = key.to_string();
 
     if CONFIG.mail_enabled() {
-        let org_name = match Organization::find_by_uuid(&org_id, &conn) {
+        let org_name = match Organization::find_by_uuid(org_id, conn) {
             Some(org) => org.name,
             None => err!("Error looking up organization."),
         };
-        let address = match User::find_by_uuid(&user_to_confirm.user_uuid, &conn) {
+        let address = match User::find_by_uuid(&user_to_confirm.user_uuid, conn) {
             Some(user) => user.email,
             None => err!("Error looking up user."),
         };
         mail::send_invite_confirmed(&address, &org_name)?;
     }
 
-    user_to_confirm.save(&conn)
+    user_to_confirm.save(conn)
 }
 
 #[get("/organizations/<org_id>/users/<org_user_id>")]
@@ -869,9 +984,40 @@ fn edit_user(
     user_to_edit.save(&conn)
 }
 
+#[delete("/organizations/<org_id>/users", data = "<data>")]
+fn bulk_delete_user(org_id: String, data: JsonUpcase<OrgBulkIds>, headers: AdminHeaders, conn: DbConn) -> Json<Value> {
+    let data: OrgBulkIds = data.into_inner().data;
+
+    let mut bulk_response = Vec::new();
+    for org_user_id in data.Ids {
+        let err_msg = match _delete_user(&org_id, &org_user_id, &headers, &conn) {
+            Ok(_) => String::from(""),
+            Err(e) => format!("{:?}", e),
+        };
+
+        bulk_response.push(json!(
+            {
+                "Object": "OrganizationBulkConfirmResponseModel",
+                "Id": org_user_id,
+                "Error": err_msg
+            }
+        ))
+    }
+
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+
 #[delete("/organizations/<org_id>/users/<org_user_id>")]
 fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn: DbConn) -> EmptyResult {
-    let user_to_delete = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &conn) {
+    _delete_user(&org_id, &org_user_id, &headers, &conn)
+}
+
+fn _delete_user(org_id: &str, org_user_id: &str, headers: &AdminHeaders, conn: &DbConn) -> EmptyResult {
+    let user_to_delete = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn) {
         Some(user) => user,
         None => err!("User to delete isn't member of the organization"),
     };
@@ -882,14 +1028,14 @@ fn delete_user(org_id: String, org_user_id: String, headers: AdminHeaders, conn:
 
     if user_to_delete.atype == UserOrgType::Owner {
         // Removing an owner: check that at least one other owner remains
-        let num_owners = UserOrganization::find_by_org_and_type(&org_id, UserOrgType::Owner as i32, &conn).len();
+        let num_owners = UserOrganization::find_by_org_and_type(org_id, UserOrgType::Owner as i32, conn).len();
 
         if num_owners <= 1 {
             err!("Can't delete the last owner")
         }
     }
 
-    user_to_delete.delete(&conn)
+    user_to_delete.delete(conn)
 }
 
 #[post("/organizations/<org_id>/users/<org_user_id>/delete")]
@@ -897,6 +1043,38 @@ fn post_delete_user(org_id: String, org_user_id: String, headers: AdminHeaders,
     delete_user(org_id, org_user_id, headers, conn)
 }
 
+#[post("/organizations/<org_id>/users/public-keys", data = "<data>")]
+fn bulk_public_keys(org_id: String, data: JsonUpcase<OrgBulkIds>, _headers: AdminHeaders, conn: DbConn) -> Json<Value> {
+    let data: OrgBulkIds = data.into_inner().data;
+
+    let mut bulk_response = Vec::new();
+    // Check all received UserOrg UUIDs and find the matching User to retrieve the public key.
+    // If the user does not exist, just ignore it and do not return any information about that UserOrg UUID.
+    // The web-vault will then skip that user for the following steps.
+    for user_org_id in data.Ids {
+        match UserOrganization::find_by_uuid_and_org(&user_org_id, &org_id, &conn) {
+            Some(user_org) => match User::find_by_uuid(&user_org.user_uuid, &conn) {
+                Some(user) => bulk_response.push(json!(
+                    {
+                        "Object": "organizationUserPublicKeyResponseModel",
+                        "Id": user_org_id,
+                        "UserId": user.uuid,
+                        "Key": user.public_key
+                    }
+                )),
+                None => debug!("User doesn't exist"),
+            },
+            None => debug!("UserOrg doesn't exist"),
+        }
+    }
+
+    Json(json!({
+        "Data": bulk_response,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+
 use super::ciphers::update_cipher_from_data;
 use super::ciphers::CipherData;
 
@@ -1034,7 +1212,7 @@ struct PolicyData {
     enabled: bool,
     #[serde(rename = "type")]
     _type: i32,
-    data: Value,
+    data: Option<Value>,
 }
 
 #[put("/organizations/<org_id>/policies/<pol_type>", data = "<data>")]
@@ -1052,20 +1230,52 @@ fn put_policy(
         None => err!("Invalid policy type"),
     };
 
+    // If enabling the TwoFactorAuthentication policy, remove this org's members that do not have 2FA
     if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled {
-        let org_list = UserOrganization::find_by_org(&org_id, &conn);
+        let org_members = UserOrganization::find_by_org(&org_id, &conn);
 
-        for user_org in org_list.into_iter() {
-            let user_twofactor_disabled = TwoFactor::find_by_user(&user_org.user_uuid, &conn).is_empty();
+        for member in org_members.into_iter() {
+            let user_twofactor_disabled = TwoFactor::find_by_user(&member.user_uuid, &conn).is_empty();
 
-            if user_twofactor_disabled && user_org.atype < UserOrgType::Admin {
+            // The policy only applies to non-Owner/non-Admin members who have accepted joining the org
+            if user_twofactor_disabled
+                && member.atype < UserOrgType::Admin
+                && member.status != UserOrgStatus::Invited as i32
+            {
                 if CONFIG.mail_enabled() {
-                    let org = Organization::find_by_uuid(&user_org.org_uuid, &conn).unwrap();
-                    let user = User::find_by_uuid(&user_org.user_uuid, &conn).unwrap();
+                    let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
+                    let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();
 
                     mail::send_2fa_removed_from_org(&user.email, &org.name)?;
                 }
-                user_org.delete(&conn)?;
+                member.delete(&conn)?;
+            }
+        }
+    }
+
+    // If enabling the SingleOrg policy, remove this org's members that are members of other orgs
+    if pol_type_enum == OrgPolicyType::SingleOrg && data.enabled {
+        let org_members = UserOrganization::find_by_org(&org_id, &conn);
+
+        for member in org_members.into_iter() {
+            // The policy only applies to non-Owner/non-Admin members who have accepted joining the org
+            if member.atype < UserOrgType::Admin && member.status != UserOrgStatus::Invited as i32 {
+                let is_member_of_another_org = UserOrganization::find_any_state_by_user(&member.user_uuid, &conn)
+                    .into_iter()
+                    // Other UserOrganization records where they have accepted being a member
+                    .filter(|uo| uo.uuid != member.uuid && uo.status != UserOrgStatus::Invited as i32)
+                    .count()
+                    > 1;
+
+                if is_member_of_another_org {
+                    if CONFIG.mail_enabled() {
+                        let org = Organization::find_by_uuid(&member.org_uuid, &conn).unwrap();
+                        let user = User::find_by_uuid(&member.user_uuid, &conn).unwrap();
+
+                        mail::send_single_org_removed_from_org(&user.email, &org.name)?;
+                    }
+                    member.delete(&conn)?;
+                }
             }
         }
     }
@@ -1084,75 +1294,47 @@ fn put_policy(
 
 #[allow(unused_variables)]
 #[get("/organizations/<org_id>/tax")]
-fn get_organization_tax(org_id: String, _headers: Headers, _conn: DbConn) -> EmptyResult {
+fn get_organization_tax(org_id: String, _headers: Headers) -> Json<Value> {
     // Prevent a 404 error, which also causes Javascript errors.
-    err!("Only allowed when not self hosted.")
+    // Upstream sends "Only allowed when not self hosted." as an error message.
+    // If we do the same it will also output this to the log, which is overkill.
+    // An empty list/data also works fine.
+    Json(_empty_data_json())
 }
 
 #[get("/plans")]
-fn get_plans(_headers: Headers, _conn: DbConn) -> Json<Value> {
+fn get_plans(_headers: Headers) -> Json<Value> {
+    // Respond with a minimal json, just enough to allow the creation of a new organization.
     Json(json!({
         "Object": "list",
-        "Data": [
-            {
+        "Data": [{
             "Object": "plan",
             "Type": 0,
             "Product": 0,
             "Name": "Free",
-            "IsAnnual": false,
             "NameLocalizationKey": "planNameFree",
-            "DescriptionLocalizationKey": "planDescFree",
-            "CanBeUsedByBusiness": false,
-            "BaseSeats": 2,
-            "BaseStorageGb": null,
-            "MaxCollections": 2,
-            "MaxUsers": 2,
-            "HasAdditionalSeatsOption": false,
-            "MaxAdditionalSeats": null,
-            "HasAdditionalStorageOption": false,
-            "MaxAdditionalStorage": null,
-            "HasPremiumAccessOption": false,
-            "TrialPeriodDays": null,
-            "HasSelfHost": false,
-            "HasPolicies": false,
-            "HasGroups": false,
-            "HasDirectory": false,
-            "HasEvents": false,
-            "HasTotp": false,
-            "Has2fa": false,
-            "HasApi": false,
-            "HasSso": false,
-            "UsersGetPremium": false,
-            "UpgradeSortOrder": -1,
-            "DisplaySortOrder": -1,
-            "LegacyYear": null,
-            "Disabled": false,
-            "StripePlanId": null,
-            "StripeSeatPlanId": null,
-            "StripeStoragePlanId": null,
-            "StripePremiumAccessPlanId": null,
-            "BasePrice": 0.0,
-            "SeatPrice": 0.0,
-            "AdditionalStoragePricePerGb": 0.0,
-            "PremiumAccessOptionPrice": 0.0
-            }
-        ],
+            "DescriptionLocalizationKey": "planDescFree"
+        }],
         "ContinuationToken": null
     }))
 }
 
 #[get("/plans/sales-tax-rates")]
-fn get_plans_tax_rates(_headers: Headers, _conn: DbConn) -> Json<Value> {
+fn get_plans_tax_rates(_headers: Headers) -> Json<Value> {
     // Prevent a 404 error, which also causes Javascript errors.
-    Json(json!({
+    Json(_empty_data_json())
+}
+
+fn _empty_data_json() -> Value {
+    json!({
         "Object": "list",
         "Data": [],
         "ContinuationToken": null
-    }))
+    })
 }
 
 #[derive(Deserialize, Debug)]
-#[allow(non_snake_case)]
+#[allow(non_snake_case, dead_code)]
 struct OrgImportGroupData {
     Name: String, // "GroupName"
     ExternalId: String, // "cn=GroupName,ou=Groups,dc=example,dc=com"
@@ -1163,6 +1345,7 @@ struct OrgImportGroupData {
 #[allow(non_snake_case)]
 struct OrgImportUserData {
     Email: String, // "user@maildomain.net"
+    #[allow(dead_code)]
     ExternalId: String, // "uid=user,ou=People,dc=example,dc=com"
     Deleted: bool,
 }
@@ -1170,6 +1353,7 @@ struct OrgImportUserData {
 #[derive(Deserialize, Debug)]
 #[allow(non_snake_case)]
 struct OrgImportData {
+    #[allow(dead_code)]
     Groups: Vec<OrgImportGroupData>,
     OverwriteExisting: bool,
     Users: Vec<OrgImportUserData>,
@@ -18,6 +18,8 @@ const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer availab
 
 pub fn routes() -> Vec<rocket::Route> {
     routes![
+        get_sends,
+        get_send,
         post_send,
         post_send_file,
         post_access,
@@ -128,6 +130,32 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
     Ok(send)
 }
 
+#[get("/sends")]
+fn get_sends(headers: Headers, conn: DbConn) -> Json<Value> {
+    let sends = Send::find_by_user(&headers.user.uuid, &conn);
+    let sends_json: Vec<Value> = sends.iter().map(|s| s.to_json()).collect();
+
+    Json(json!({
+        "Data": sends_json,
+        "Object": "list",
+        "ContinuationToken": null
+    }))
+}
+
+#[get("/sends/<uuid>")]
+fn get_send(uuid: String, headers: Headers, conn: DbConn) -> JsonResult {
+    let send = match Send::find_by_uuid(&uuid, &conn) {
+        Some(send) => send,
+        None => err!("Send not found"),
+    };
+
+    if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
+        err!("Send is not owned by user")
+    }
+
+    Ok(Json(send.to_json()))
+}
+
 #[post("/sends", data = "<data>")]
 fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Notify) -> JsonResult {
     enforce_disable_send_policy(&headers, &conn)?;
@@ -139,9 +167,9 @@ fn post_send(data: JsonUpcase<SendData>, headers: Headers, conn: DbConn, nt: Not
         err!("File sends should use /api/sends/file")
     }
 
-    let mut send = create_send(data, headers.user.uuid.clone())?;
+    let mut send = create_send(data, headers.user.uuid)?;
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendCreate, &send, &send.update_users_revision(&conn));
 
     Ok(Json(send.to_json()))
 }
@@ -182,7 +210,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
     };
 
     // Create the Send
-    let mut send = create_send(data.data, headers.user.uuid.clone())?;
+    let mut send = create_send(data.data, headers.user.uuid)?;
     let file_id = crate::crypto::generate_send_id();
 
     if send.atype != SendType::File as i32 {
@@ -225,7 +253,7 @@ fn post_send_file(data: Data, content_type: &ContentType, headers: Headers, conn
 
     // Save the changes in the database
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendCreate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
 
     Ok(Json(send.to_json()))
 }
@@ -397,7 +425,7 @@ fn put_send(id: String, data: JsonUpcase<SendData>, headers: Headers, conn: DbCo
     }
 
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
 
     Ok(Json(send.to_json()))
 }
@@ -414,7 +442,7 @@ fn delete_send(id: String, headers: Headers, conn: DbConn, nt: Notify) -> EmptyR
     }
 
     send.delete(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendDelete, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendDelete, &send, &send.update_users_revision(&conn));
 
     Ok(())
 }
@@ -434,7 +462,7 @@ fn put_remove_password(id: String, headers: Headers, conn: DbConn, nt: Notify) -
 
     send.set_password(None);
     send.save(&conn)?;
-    nt.send_user_update(UpdateType::SyncSendUpdate, &headers.user);
+    nt.send_send_update(UpdateType::SyncSendUpdate, &send, &send.update_users_revision(&conn));
 
     Ok(Json(send.to_json()))
 }
@@ -62,7 +62,7 @@ fn activate_authenticator(
     let data: EnableAuthenticatorData = data.into_inner().data;
     let password_hash = data.MasterPasswordHash;
     let key = data.Key;
-    let token = data.Token.into_i32()? as u64;
+    let token = data.Token.into_string();
 
     let mut user = headers.user;
 
@@ -81,7 +81,7 @@ fn activate_authenticator(
     }
 
     // Validate the token provided with the key, and save the new twofactor
-    validate_totp_code(&user.uuid, token, &key.to_uppercase(), &ip, &conn)?;
+    validate_totp_code(&user.uuid, &token, &key.to_uppercase(), &ip, &conn)?;
 
     _generate_recover_code(&mut user, &conn);
 
@@ -109,16 +109,15 @@ pub fn validate_totp_code_str(
     ip: &ClientIp,
     conn: &DbConn,
 ) -> EmptyResult {
-    let totp_code: u64 = match totp_code.parse() {
-        Ok(code) => code,
-        _ => err!("TOTP code is not a number"),
-    };
+    if !totp_code.chars().all(char::is_numeric) {
+        err!("TOTP code is not a number");
+    }
 
     validate_totp_code(user_uuid, totp_code, secret, ip, conn)
 }
 
-pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
-    use oath::{totp_raw_custom_time, HashType};
+pub fn validate_totp_code(user_uuid: &str, totp_code: &str, secret: &str, ip: &ClientIp, conn: &DbConn) -> EmptyResult {
+    use totp_lite::{totp_custom, Sha1};
 
     let decoded_secret = match BASE32.decode(secret.as_bytes()) {
         Ok(s) => s,
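
One observation on the numeric pre-check above (an editor's note, not part of the patch): `char::is_numeric` also accepts non-ASCII decimal digits, so a stricter guard could use `is_ascii_digit`, roughly:

// Hypothetical stricter variant: require a non-empty, ASCII-digit-only code.
fn is_plain_digit_code(code: &str) -> bool {
    !code.is_empty() && code.bytes().all(|b| b.is_ascii_digit())
}
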
@@ -130,27 +129,28 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
         _ => TwoFactor::new(user_uuid.to_string(), TwoFactorType::Authenticator, secret.to_string()),
     };
 
-    // Get the current system time in UNIX Epoch (UTC)
-    let current_time = chrono::Utc::now();
-    let current_timestamp = current_time.timestamp();
-
     // The amount of steps back and forward in time
     // Also check if we need to disable time-drifted TOTP codes.
     // If that is the case, we set the steps to 0 so only the current TOTP is valid.
     let steps = !CONFIG.authenticator_disable_time_drift() as i64;
 
+    // Get the current system time in UNIX Epoch (UTC)
+    let current_time = chrono::Utc::now();
+    let current_timestamp = current_time.timestamp();
+
     for step in -steps..=steps {
         let time_step = current_timestamp / 30i64 + step;
-        // We need to calculate the time offset and cast it as an i128,
-        // else we can't do math with it on a default u64 variable.
+
+        // We need to calculate the time offset and cast it as a u64, since we only use times
+        // from now onward and the TOTP generator needs a u64 instead of the default i64.
         let time = (current_timestamp + step * 30i64) as u64;
-        let generated = totp_raw_custom_time(&decoded_secret, 6, 0, 30, time, &HashType::SHA1);
+        let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
 
         // Check that the given code equals the generated one and that the time_step is larger than the one last used.
         if generated == totp_code && time_step > twofactor.last_used as i64 {
             // If the step does not equal 0, the time has drifted either server or client side.
             if step != 0 {
-                info!("TOTP Time drift detected. The step offset is {}", step);
+                warn!("TOTP Time drift detected. The step offset is {}", step);
             }
 
             // Save the last used time step so only TOTP time steps higher than this one are allowed.
@@ -159,7 +159,7 @@ pub fn validate_totp_code(user_uuid: &str, totp_code: u64, secret: &str, ip: &Cl
             twofactor.save(conn)?;
             return Ok(());
         } else if generated == totp_code && time_step <= twofactor.last_used as i64 {
-            warn!("This or a TOTP code within {} steps back and forward has already been used!", steps);
+            warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
             err!(format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip));
         }
     }
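
A compact, self-contained sketch of the drift window this hunk implements on top of totp_lite (hypothetical `code_matches` helper; only the totp_lite crate assumed):

use totp_lite::{totp_custom, Sha1};

// Hypothetical condensation of the loop above: accept the code if it matches
// the current 30-second step or one step either side (steps == 0 disables drift).
fn code_matches(secret: &[u8], code: &str, unix_time: i64, steps: i64) -> bool {
    (-steps..=steps).any(|step| {
        let time = (unix_time + step * 30) as u64;
        totp_custom::<Sha1>(30, 6, secret, time) == code
    })
}
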
@@ -80,14 +80,16 @@ fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) ->
         err!("Invalid password");
     }
 
-    let type_ = TwoFactorType::Email as i32;
-    let enabled = match TwoFactor::find_by_user_and_type(&user.uuid, type_, &conn) {
-        Some(x) => x.enabled,
-        _ => false,
+    let (enabled, mfa_email) = match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &conn) {
+        Some(x) => {
+            let twofactor_data = EmailTokenData::from_json(&x.data)?;
+            (true, json!(twofactor_data.email))
+        }
+        _ => (false, json!(null)),
     };
 
     Ok(Json(json!({
-        "Email": user.email,
+        "Email": mfa_email,
         "Enabled": enabled,
         "Object": "twoFactorEmail"
     })))
@@ -1,3 +1,4 @@
+use chrono::{Duration, Utc};
 use data_encoding::BASE32;
 use rocket::Route;
 use rocket_contrib::json::Json;
@@ -7,7 +8,7 @@ use crate::{
     api::{JsonResult, JsonUpcase, NumberOrString, PasswordData},
     auth::Headers,
     crypto,
-    db::{models::*, DbConn},
+    db::{models::*, DbConn, DbPool},
     mail, CONFIG,
 };
 
@@ -156,3 +157,33 @@ fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, c
 fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, conn: DbConn) -> JsonResult {
     disable_twofactor(data, headers, conn)
 }
+
+pub fn send_incomplete_2fa_notifications(pool: DbPool) {
+    debug!("Sending notifications for incomplete 2FA logins");
+
+    if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
+        return;
+    }
+
+    let conn = match pool.get() {
+        Ok(conn) => conn,
+        _ => {
+            error!("Failed to get DB connection in send_incomplete_2fa_notifications()");
+            return;
+        }
+    };
+
+    let now = Utc::now().naive_utc();
+    let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
+    let incomplete_logins = TwoFactorIncomplete::find_logins_before(&(now - time_limit), &conn);
+    for login in incomplete_logins {
+        let user = User::find_by_uuid(&login.user_uuid, &conn).expect("User not found");
+        info!(
+            "User {} did not complete a 2FA login within the configured time limit. IP: {}",
+            user.email, login.ip_address
+        );
+        mail::send_incomplete_2fa_login(&user.email, &login.ip_address, &login.login_time, &login.device_name)
+            .expect("Error sending incomplete 2FA email");
+        login.delete(&conn).expect("Error deleting incomplete 2FA record");
+    }
+}
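
The cutoff used by the job above is simply "now minus the configured limit"; a minimal sketch (hypothetical free function, chrono only):

use chrono::{Duration, NaiveDateTime, Utc};

// Hypothetical restatement: logins started before this instant that still have
// no completed second factor are treated as incomplete.
fn incomplete_2fa_cutoff(limit_minutes: i64) -> NaiveDateTime {
    Utc::now().naive_utc() - Duration::minutes(limit_minutes)
}
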
@@ -1,6 +1,7 @@
 use rocket::Route;
 use rocket_contrib::json::Json;
 use serde_json::Value;
+use url::Url;
 use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState, RegistrationState, Webauthn};
 
 use crate::{
@@ -22,19 +23,18 @@ pub fn routes() {
 
 struct WebauthnConfig {
     url: String,
+    origin: Url,
     rpid: String,
 }
 
 impl WebauthnConfig {
     fn load() -> Webauthn<Self> {
         let domain = CONFIG.domain();
+        let domain_origin = CONFIG.domain_origin();
         Webauthn::new(Self {
-            rpid: reqwest::Url::parse(&domain)
-                .map(|u| u.domain().map(str::to_owned))
-                .ok()
-                .flatten()
-                .unwrap_or_default(),
+            rpid: Url::parse(&domain).map(|u| u.domain().map(str::to_owned)).ok().flatten().unwrap_or_default(),
             url: domain,
+            origin: Url::parse(&domain_origin).unwrap(),
         })
     }
 }
@@ -44,8 +44,8 @@ impl webauthn_rs::WebauthnConfig for WebauthnConfig {
         &self.url
     }
 
-    fn get_origin(&self) -> &str {
-        &self.url
+    fn get_origin(&self) -> &Url {
+        &self.origin
    }
 
     fn get_relying_party_id(&self) -> &str {
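
The relying-party id derivation above reduces to "registered domain of the configured base URL"; a self-contained sketch (hypothetical helper; only the url crate assumed):

use url::Url;

// Hypothetical mirror of the rpid derivation: take the domain of the parsed
// base URL, falling back to an empty string on any parse failure.
fn rpid_from(domain: &str) -> String {
    Url::parse(domain).map(|u| u.domain().map(str::to_owned)).ok().flatten().unwrap_or_default()
}
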
@@ -250,7 +250,7 @@ fn is_domain_blacklisted(domain: &str) -> bool {
 
             // Use the pre-generate Regex stored in a Lazy HashMap.
             if regex.is_match(domain) {
-                warn!("Blacklisted domain: {:#?} matched {:#?}", domain, blacklist);
+                warn!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
                 is_blacklisted = true;
             }
         }
@@ -555,7 +555,7 @@ fn get_page(url: &str) -> Result<Response, Error> {
 
 fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
     if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()) {
-        err!("Favicon rel linked to a blacklisted domain!");
+        err!("Favicon resolves to a blacklisted domain or IP!", url);
     }
 
     let mut client = CLIENT.get(url);
@@ -563,7 +563,10 @@ fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
         client = client.header("Referer", referer)
     }
 
-    client.send()?.error_for_status().map_err(Into::into)
+    match client.send() {
+        Ok(c) => c.error_for_status().map_err(Into::into),
+        Err(e) => err_silent!(format!("{}", e)),
+    }
 }
 
 /// Returns a Integer with the priority of the type of the icon which to prefer.
@@ -647,7 +650,7 @@ fn parse_sizes(sizes: Option<&str>) -> (u16, u16) {
 
 fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
     if is_domain_blacklisted(domain) {
-        err!("Domain is blacklisted", domain)
+        err_silent!("Domain is blacklisted", domain)
     }
 
     let icon_result = get_icon_url(domain)?;
@@ -676,7 +679,7 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
                         break;
                     }
                 }
-                _ => warn!("Extracted icon from data:image uri is invalid"),
+                _ => debug!("Extracted icon from data:image uri is invalid"),
             };
         } else {
            match get_page_with_referer(&icon.href, &icon_result.referer) {
@@ -692,13 +695,13 @@ fn download_icon(domain: &str) -> Result<(Vec<u8>, Option<&str>), Error> {
                     info!("Downloaded icon from {}", icon.href);
                     break;
                 }
-                _ => warn!("Download failed for {}", icon.href),
+                Err(e) => debug!("{:?}", e),
             };
         }
     }
 
     if buffer.is_empty() {
-        err!("Empty response downloading icon")
+        err_silent!("Empty response or unable find a valid icon", domain);
     }
 
     Ok((buffer, icon_type))
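The recurring change in these icon hunks is from loud failures (`err!`, `warn!`) to quiet ones (`err_silent!`, `debug!`), since unreachable or blacklisted favicon hosts are expected in normal operation. The `err_silent!` macro itself is project-specific; the following is a hypothetical reduced version of the pattern, where the error carries a flag that the caller maps to a log level:

    // Hypothetical reduction of the err!/err_silent! split.
    #[derive(Debug)]
    struct IconError {
        message: String,
        loud: bool,
    }

    macro_rules! err_silent {
        ($msg:expr) => {
            return Err(IconError { message: $msg.to_string(), loud: false })
        };
    }

    fn download(domain_blacklisted: bool) -> Result<Vec<u8>, IconError> {
        if domain_blacklisted {
            err_silent!("Domain is blacklisted");
        }
        Ok(Vec::new())
    }

    fn main() {
        if let Err(e) = download(true) {
            if e.loud {
                eprintln!("ERROR: {}", e.message);
            } else {
                println!("debug: {}", e.message); // expected failure, keep the logs quiet
            }
        }
    }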
@@ -1,4 +1,4 @@
-use chrono::Local;
+use chrono::Utc;
 use num_traits::FromPrimitive;
 use rocket::{
     request::{Form, FormItems, FromForm},
@@ -56,7 +56,7 @@ fn _refresh_login(data: ConnectData, conn: DbConn) -> JsonResult {
 
     // COMMON
     let user = User::find_by_uuid(&device.user_uuid, &conn).unwrap();
-    let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
 
     let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
 
@@ -102,10 +102,9 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
         err!("This user has been disabled", format!("IP: {}. Username: {}.", ip.ip, username))
     }
 
-    let now = Local::now();
+    let now = Utc::now().naive_utc();
 
     if user.verified_at.is_none() && CONFIG.mail_enabled() && CONFIG.signups_verify() {
-        let now = now.naive_utc();
         if user.last_verifying_at.is_none()
             || now.signed_duration_since(user.last_verifying_at.unwrap()).num_seconds()
                 > CONFIG.signups_verify_resend_time() as i64
@@ -147,7 +146,7 @@ fn _password_login(data: ConnectData, conn: DbConn, ip: &ClientIp) -> JsonResult
     }
 
     // Common
-    let orgs = UserOrganization::find_by_user(&user.uuid, &conn);
+    let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, &conn);
 
     let (access_token, expires_in) = device.refresh_tokens(&user, orgs);
     device.save(&conn)?;
@@ -219,6 +218,8 @@ fn twofactor_auth(
         return Ok(None);
     }
 
+    TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn)?;
+
     let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
     let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, asume the first one
 
@@ -262,6 +263,8 @@ fn twofactor_auth(
         _ => err!("Invalid two factor provider"),
     }
 
+    TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn)?;
+
     if !CONFIG.disable_2fa_remember() && remember == 1 {
         Ok(Some(device.refresh_twofactor_remember()))
     } else {
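The two insertions in `twofactor_auth` bracket the second-factor exchange: a login is recorded as incomplete as soon as the master password is accepted, and the record is cleared only after a valid second factor. A self-contained toy model of that bookkeeping (the real table also stores device name, login time and IP):

    use std::collections::HashMap;

    // Key is (user, device), mirroring the primary key of twofactor_incomplete.
    struct IncompleteLogins(HashMap<(String, String), ()>);

    impl IncompleteLogins {
        fn mark_incomplete(&mut self, user: &str, device: &str) {
            // entry() preserves the first record: re-marking must not reset it,
            // otherwise repeated 2FA attempts could delay the notification forever.
            self.0.entry((user.into(), device.into())).or_insert(());
        }
        fn mark_complete(&mut self, user: &str, device: &str) {
            self.0.remove(&(user.to_string(), device.to_string()));
        }
    }

    fn main() {
        let mut logins = IncompleteLogins(HashMap::new());
        logins.mark_incomplete("alice", "dev-1"); // password accepted, 2FA pending
        logins.mark_complete("alice", "dev-1");   // second factor verified in time
        assert!(logins.0.is_empty());
    }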
@@ -13,6 +13,8 @@ pub use crate::api::{
     core::purge_sends,
     core::purge_trashed_ciphers,
     core::routes as core_routes,
+    core::two_factor::send_incomplete_2fa_notifications,
+    core::{emergency_notification_reminder_job, emergency_request_timeout_job},
     icons::routes as icons_routes,
     identity::routes as identity_routes,
     notifications::routes as notifications_routes,
@@ -4,7 +4,7 @@ use rocket::Route;
 use rocket_contrib::json::Json;
 use serde_json::Value as JsonValue;
 
-use crate::{api::EmptyResult, auth::Headers, db::DbConn, Error, CONFIG};
+use crate::{api::EmptyResult, auth::Headers, Error, CONFIG};
 
 pub fn routes() -> Vec<Route> {
     routes![negotiate, websockets_err]
@@ -30,7 +30,7 @@ fn websockets_err() -> EmptyResult {
 }
 
 #[post("/hub/negotiate")]
-fn negotiate(_headers: Headers, _conn: DbConn) -> Json<JsonValue> {
+fn negotiate(_headers: Headers) -> Json<JsonValue> {
     use crate::crypto;
     use data_encoding::BASE64URL;
 
@@ -65,7 +65,7 @@ use chashmap::CHashMap;
 use chrono::NaiveDateTime;
 use serde_json::from_str;
 
-use crate::db::models::{Cipher, Folder, User};
+use crate::db::models::{Cipher, Folder, Send, User};
 
 use rmpv::Value;
 
@@ -335,6 +335,23 @@ impl WebSocketUsers {
             self.send_update(uuid, &data).ok();
         }
     }
+
+    pub fn send_send_update(&self, ut: UpdateType, send: &Send, user_uuids: &[String]) {
+        let user_uuid = convert_option(send.user_uuid.clone());
+
+        let data = create_update(
+            vec![
+                ("Id".into(), send.uuid.clone().into()),
+                ("UserId".into(), user_uuid),
+                ("RevisionDate".into(), serialize_date(send.revision_date)),
+            ],
+            ut,
+        );
+
+        for uuid in user_uuids {
+            self.send_update(uuid, &data).ok();
+        }
+    }
 }
 
 /* Message Structure
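A toy mirror of the fan-out in the new `send_send_update`: one payload is built per update, then delivered once to every affected user UUID (the JSON string stands in for the real `create_update()`/MessagePack payload):

    use std::collections::HashMap;

    fn send_send_update(channels: &mut HashMap<String, Vec<String>>, send_uuid: &str, user_uuids: &[String]) {
        let data = format!("{{\"Id\":\"{}\"}}", send_uuid); // stand-in for create_update()
        for uuid in user_uuids {
            channels.entry(uuid.clone()).or_default().push(data.clone());
        }
    }

    fn main() {
        let mut channels = HashMap::new();
        send_send_update(&mut channels, "send-1", &["alice".into(), "bob".into()]);
        assert_eq!(channels["alice"], channels["bob"]);
    }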
@@ -64,8 +64,10 @@ fn attachments(uuid: SafeString, file_id: SafeString) -> Option<NamedFile> {
     NamedFile::open(Path::new(&CONFIG.attachments_folder()).join(uuid).join(file_id)).ok()
 }
 
+// We use DbConn here to let the alive healthcheck also verify the database connection.
+use crate::db::DbConn;
 #[get("/alive")]
-fn alive() -> Json<String> {
+fn alive(_conn: DbConn) -> Json<String> {
     use crate::util::format_date;
     use chrono::Utc;
 
43 src/auth.rs
@@ -22,6 +22,8 @@ static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
 
 pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
 static JWT_INVITE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|invite", CONFIG.domain_origin()));
+static JWT_EMERGENCY_ACCESS_INVITE_ISSUER: Lazy<String> =
+    Lazy::new(|| format!("{}|emergencyaccessinvite", CONFIG.domain_origin()));
 static JWT_DELETE_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|delete", CONFIG.domain_origin()));
 static JWT_VERIFYEMAIL_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|verifyemail", CONFIG.domain_origin()));
 static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.domain_origin()));
@@ -75,6 +77,10 @@ pub fn decode_invite(token: &str) -> Result<InviteJwtClaims, Error> {
     decode_jwt(token, JWT_INVITE_ISSUER.to_string())
 }
 
+pub fn decode_emergency_access_invite(token: &str) -> Result<EmergencyAccessInviteJwtClaims, Error> {
+    decode_jwt(token, JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string())
+}
+
 pub fn decode_delete(token: &str) -> Result<BasicJwtClaims, Error> {
     decode_jwt(token, JWT_DELETE_ISSUER.to_string())
 }
@@ -159,6 +165,43 @@ pub fn generate_invite_claims(
     }
 }
 
+#[derive(Debug, Serialize, Deserialize)]
+pub struct EmergencyAccessInviteJwtClaims {
+    // Not before
+    pub nbf: i64,
+    // Expiration time
+    pub exp: i64,
+    // Issuer
+    pub iss: String,
+    // Subject
+    pub sub: String,
+
+    pub email: String,
+    pub emer_id: Option<String>,
+    pub grantor_name: Option<String>,
+    pub grantor_email: Option<String>,
+}
+
+pub fn generate_emergency_access_invite_claims(
+    uuid: String,
+    email: String,
+    emer_id: Option<String>,
+    grantor_name: Option<String>,
+    grantor_email: Option<String>,
+) -> EmergencyAccessInviteJwtClaims {
+    let time_now = Utc::now().naive_utc();
+    EmergencyAccessInviteJwtClaims {
+        nbf: time_now.timestamp(),
+        exp: (time_now + Duration::days(5)).timestamp(),
+        iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
+        sub: uuid,
+        email,
+        emer_id,
+        grantor_name,
+        grantor_email,
+    }
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct BasicJwtClaims {
     // Not before
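The emergency-access invite claims follow the same shape as the existing invite claims, with a five-day validity window. A self-contained sketch of what such a claim serializes to (the issuer domain is an assumed example; signing via `encode_jwt` is omitted):

    use chrono::{Duration, Utc};
    use serde::{Deserialize, Serialize};

    #[derive(Debug, Serialize, Deserialize)]
    struct EmergencyAccessInviteJwtClaims {
        nbf: i64,
        exp: i64,
        iss: String,
        sub: String,
        email: String,
    }

    fn main() {
        let now = Utc::now().naive_utc();
        let claims = EmergencyAccessInviteJwtClaims {
            nbf: now.timestamp(),
            exp: (now + Duration::days(5)).timestamp(), // same 5-day window as above
            iss: "https://vault.example.com|emergencyaccessinvite".into(),
            sub: "some-user-uuid".into(),
            email: "grantee@example.com".into(),
        };
        println!("{}", serde_json::to_string_pretty(&claims).unwrap());
    }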
187 src/config.rs
@@ -2,7 +2,6 @@ use std::process::exit;
 use std::sync::RwLock;
 
 use once_cell::sync::Lazy;
-use regex::Regex;
 use reqwest::Url;
 
 use crate::{
@@ -23,21 +22,6 @@ pub static CONFIG: Lazy<Config> = Lazy::new(|| {
     })
 });
 
-static PRIVACY_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"[\w]").unwrap());
-const PRIVACY_CONFIG: &[&str] = &[
-    "allowed_iframe_ancestors",
-    "database_url",
-    "domain_origin",
-    "domain_path",
-    "domain",
-    "helo_name",
-    "org_creation_users",
-    "signups_domains_whitelist",
-    "smtp_from",
-    "smtp_host",
-    "smtp_username",
-];
-
 pub type Pass = String;
 
 macro_rules! make_config {
@@ -61,7 +45,7 @@ macro_rules! make_config {
         _overrides: Vec<String>,
     }
 
-    #[derive(Debug, Clone, Default, Deserialize, Serialize)]
+    #[derive(Clone, Default, Deserialize, Serialize)]
     pub struct ConfigBuilder {
         $($(
             #[serde(skip_serializing_if = "Option::is_none")]
@@ -133,19 +117,6 @@ macro_rules! make_config {
             builder
         }
 
-        /// Returns a new builder with all the elements from self,
-        /// except those that are equal in both sides
-        fn _remove(&self, other: &Self) -> Self {
-            let mut builder = ConfigBuilder::default();
-            $($(
-                if &self.$name != &other.$name {
-                    builder.$name = self.$name.clone();
-                }
-
-            )+)+
-            builder
-        }
-
         fn build(&self) -> ConfigItems {
             let mut config = ConfigItems::default();
             let _domain_set = self.domain.is_some();
@@ -161,12 +132,13 @@ macro_rules! make_config {
         }
     }
 
-    #[derive(Debug, Clone, Default)]
-    pub struct ConfigItems { $($(pub $name: make_config!{@type $ty, $none_action}, )+)+ }
+    #[derive(Clone, Default)]
+    struct ConfigItems { $($( $name: make_config!{@type $ty, $none_action}, )+)+ }
 
     #[allow(unused)]
     impl Config {
         $($(
+            $(#[doc = $doc])+
             pub fn $name(&self) -> make_config!{@type $ty, $none_action} {
                 self.inner.read().unwrap().config.$name.clone()
             }
@@ -189,38 +161,91 @@ macro_rules! make_config {
 
         fn _get_doc(doc: &str) -> serde_json::Value {
             let mut split = doc.split("|>").map(str::trim);
-            json!({
-                "name": split.next(),
-                "description": split.next()
+
+            // We do not use the json!() macro here since that causes a lot of macro recursion.
+            // This slows down compile time and it also causes issues with rust-analyzer
+            serde_json::Value::Object({
+                let mut doc_json = serde_json::Map::new();
+                doc_json.insert("name".into(), serde_json::to_value(split.next()).unwrap());
+                doc_json.insert("description".into(), serde_json::to_value(split.next()).unwrap());
+                doc_json
             })
         }
 
-        json!([ $({
-            "group": stringify!($group),
-            "grouptoggle": stringify!($($group_enabled)?),
-            "groupdoc": make_config!{ @show $($groupdoc)? },
-            "elements": [
-                $( {
-                    "editable": $editable,
-                    "name": stringify!($name),
-                    "value": cfg.$name,
-                    "default": def.$name,
-                    "type": _get_form_type(stringify!($ty)),
-                    "doc": _get_doc(concat!($($doc),+)),
-                    "overridden": overriden.contains(&stringify!($name).to_uppercase()),
-                }, )+
-            ]}, )+ ])
+        // We do not use the json!() macro here since that causes a lot of macro recursion.
+        // This slows down compile time and it also causes issues with rust-analyzer
+        serde_json::Value::Array(<[_]>::into_vec(Box::new([
+            $(
+                serde_json::Value::Object({
+                    let mut group = serde_json::Map::new();
+                    group.insert("group".into(), (stringify!($group)).into());
+                    group.insert("grouptoggle".into(), (stringify!($($group_enabled)?)).into());
+                    group.insert("groupdoc".into(), (make_config!{ @show $($groupdoc)? }).into());
+
+                    group.insert("elements".into(), serde_json::Value::Array(<[_]>::into_vec(Box::new([
+                        $(
+                            serde_json::Value::Object({
+                                let mut element = serde_json::Map::new();
+                                element.insert("editable".into(), ($editable).into());
+                                element.insert("name".into(), (stringify!($name)).into());
+                                element.insert("value".into(), serde_json::to_value(cfg.$name).unwrap());
+                                element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
+                                element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
+                                element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
+                                element.insert("overridden".into(), (overriden.contains(&stringify!($name).to_uppercase())).into());
+                                element
+                            }),
+                        )+
+                    ]))));
+                    group
+                }),
+            )+
+        ])))
     }
 
     pub fn get_support_json(&self) -> serde_json::Value {
+        // Define which config keys need to be masked.
+        // Pass types will always be masked and no need to put them in the list.
+        // Besides Pass, only String types will be masked via _privacy_mask.
+        const PRIVACY_CONFIG: &[&str] = &[
+            "allowed_iframe_ancestors",
+            "database_url",
+            "domain_origin",
+            "domain_path",
+            "domain",
+            "helo_name",
+            "org_creation_users",
+            "signups_domains_whitelist",
+            "smtp_from",
+            "smtp_host",
+            "smtp_username",
+        ];
+
         let cfg = {
             let inner = &self.inner.read().unwrap();
             inner.config.clone()
         };
 
-        json!({ $($(
-            stringify!($name): make_config!{ @supportstr $name, cfg.$name, $ty, $none_action },
-        )+)+ })
+        /// We map over the string and remove all alphanumeric, _ and - characters.
+        /// This is the fastest way (within micro-seconds) instead of using a regex (which takes mili-seconds)
+        fn _privacy_mask(value: &str) -> String {
+            value.chars().map(|c|
+                match c {
+                    c if c.is_alphanumeric() => '*',
+                    '_' => '*',
+                    '-' => '*',
+                    _ => c
+                }
+            ).collect::<String>()
+        }
+
+        serde_json::Value::Object({
+            let mut json = serde_json::Map::new();
+            $($(
+                json.insert(stringify!($name).into(), make_config!{ @supportstr $name, cfg.$name, $ty, $none_action });
+            )+)+;
+            json
+        })
     }
 
     pub fn get_overrides(&self) -> Vec<String> {
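The behavior of `_privacy_mask` is easy to pin down with a couple of examples: alphanumerics, `_` and `-` become `*`, while structural separators (`://`, `.`, `@`) survive, so the shape of a masked value stays recognizable. A standalone copy to verify that:

    fn privacy_mask(value: &str) -> String {
        value
            .chars()
            .map(|c| match c {
                c if c.is_alphanumeric() => '*',
                '_' | '-' => '*',
                _ => c,
            })
            .collect()
    }

    fn main() {
        assert_eq!(privacy_mask("https://vault.example.com"), "*****://*****.*******.***");
        assert_eq!(privacy_mask("smtp_user-01@example.org"), "************@*******.***");
    }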
@@ -228,29 +253,30 @@ macro_rules! make_config {
             let inner = &self.inner.read().unwrap();
             inner._overrides.clone()
         };
 
         overrides
     }
 }
 };
 
 // Support string print
-( @supportstr $name:ident, $value:expr, Pass, option ) => { $value.as_ref().map(|_| String::from("***")) }; // Optional pass, we map to an Option<String> with "***"
+( @supportstr $name:ident, $value:expr, Pass, option ) => { serde_json::to_value($value.as_ref().map(|_| String::from("***"))).unwrap() }; // Optional pass, we map to an Option<String> with "***"
-( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { String::from("***") }; // Required pass, we return "***"
+( @supportstr $name:ident, $value:expr, Pass, $none_action:ident ) => { "***".into() }; // Required pass, we return "***"
-( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, String, option ) => { // Optional other value, we return as is or convert to string to apply the privacy config
     if PRIVACY_CONFIG.contains(&stringify!($name)) {
-        json!($value.as_ref().map(|x| PRIVACY_REGEX.replace_all(&x.to_string(), "${1}*").to_string()))
+        serde_json::to_value($value.as_ref().map(|x| _privacy_mask(x) )).unwrap()
     } else {
-        json!($value)
+        serde_json::to_value($value).unwrap()
     }
 };
-( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, String, $none_action:ident ) => { // Required other value, we return as is or convert to string to apply the privacy config
     if PRIVACY_CONFIG.contains(&stringify!($name)) {
-        json!(PRIVACY_REGEX.replace_all(&$value.to_string(), "${1}*").to_string())
+        _privacy_mask(&$value).into()
     } else {
-        json!($value)
+        ($value).into()
     }
 };
+( @supportstr $name:ident, $value:expr, $ty:ty, option ) => { serde_json::to_value($value).unwrap() }; // Optional other value, we return as is or convert to string to apply the privacy config
+( @supportstr $name:ident, $value:expr, $ty:ty, $none_action:ident ) => { ($value).into() }; // Required other value, we return as is or convert to string to apply the privacy config
 
 // Group or empty string
 ( @show ) => { "" };
@@ -300,8 +326,6 @@ make_config! {
         data_folder: String, false, def, "data".to_string();
         /// Database URL
         database_url: String, false, auto, |c| format!("{}/{}", c.data_folder, "db.sqlite3");
-        /// Database connection pool size
-        database_max_conns: u32, false, def, 10;
         /// Icon cache folder
         icon_cache_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "icon_cache");
         /// Attachments folder
@@ -333,6 +357,15 @@ make_config! {
         /// Trash purge schedule |> Cron schedule of the job that checks for trashed items to delete permanently.
         /// Defaults to daily. Set blank to disable this job.
         trash_purge_schedule: String, false, def, "0 5 0 * * *".to_string();
+        /// Incomplete 2FA login schedule |> Cron schedule of the job that checks for incomplete 2FA logins.
+        /// Defaults to once every minute. Set blank to disable this job.
+        incomplete_2fa_schedule: String, false, def, "30 * * * * *".to_string();
+        /// Emergency notification reminder schedule |> Cron schedule of the job that sends expiration reminders to emergency access grantors.
+        /// Defaults to hourly. Set blank to disable this job.
+        emergency_notification_reminder_schedule: String, false, def, "0 5 * * * *".to_string();
+        /// Emergency request timeout schedule |> Cron schedule of the job that grants emergency access requests that have met the required wait time.
+        /// Defaults to hourly. Set blank to disable this job.
+        emergency_request_timeout_schedule: String, false, def, "0 5 * * * *".to_string();
     },
 
     /// General settings
@@ -366,6 +399,13 @@ make_config! {
         /// sure to inform all users of any changes to this setting.
         trash_auto_delete_days: i64, true, option;
 
+        /// Incomplete 2FA time limit |> Number of minutes to wait before a 2FA-enabled login is
+        /// considered incomplete, resulting in an email notification. An incomplete 2FA login is one
+        /// where the correct master password was provided but the required 2FA step was not completed,
+        /// which potentially indicates a master password compromise. Set to 0 to disable this check.
+        /// This setting applies globally to all users.
+        incomplete_2fa_time_limit: i64, true, def, 3;
+
         /// Disable icon downloads |> Set to true to disable icon downloading, this would still serve icons from
         /// $ICON_CACHE_FOLDER, but it won't produce any external network request. Needs to set $ICON_CACHE_TTL to 0,
         /// otherwise it will delete them and they won't be downloaded again.
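The new schedule strings use the six-field cron syntax (seconds first) parsed by the cron crate, so "0 5 * * * *" means second 0, minute 5 of every hour. A quick way to sanity-check a schedule before putting it in the config (assumes the `cron` and `chrono` crates):

    use chrono::Utc;
    use cron::Schedule;
    use std::str::FromStr;

    fn main() {
        // Field order: sec min hour day-of-month month day-of-week
        let schedule = Schedule::from_str("0 5 * * * *").unwrap();
        for next in schedule.upcoming(Utc).take(3) {
            println!("next run: {}", next);
        }
    }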
@@ -385,6 +425,8 @@ make_config! {
         org_creation_users: String, true, def, "".to_string();
         /// Allow invitations |> Controls whether users can be invited by organization admins, even when signups are otherwise disabled
         invitations_allowed: bool, true, def, true;
+        /// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
+        emergency_access_allowed: bool, true, def, true;
         /// Password iterations |> Number of server-side passwords hashing iterations.
         /// The changes only apply when a user changes their password. Not recommended to lower the value
         password_iterations: i32, true, def, 100_000;
@@ -453,6 +495,9 @@ make_config! {
         /// Max database connection retries |> Number of times to retry the database connection during startup, with 1 second between each retry, set to 0 to retry indefinitely
         db_connection_retries: u32, false, def, 15;
 
+        /// Database connection pool size
+        database_max_conns: u32, false, def, 10;
+
         /// Bypass admin page security (Know the risks!) |> Disables the Admin Token for the admin page so you may use your own auth in-front
         disable_admin_token: bool, true, def, false;
 
@@ -607,7 +652,7 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
 
     // Check if the icon blacklist regex is valid
     if let Some(ref r) = cfg.icon_blacklist_regex {
-        let validate_regex = Regex::new(r);
+        let validate_regex = regex::Regex::new(r);
         match validate_regex {
             Ok(_) => (),
             Err(e) => err!(format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e)),
@@ -699,7 +744,7 @@ impl Config {
         Ok(())
     }
 
-    pub fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
+    fn update_config_partial(&self, other: ConfigBuilder) -> Result<(), Error> {
         let builder = {
             let usr = &self.inner.read().unwrap()._usr;
             let mut _overrides = Vec::new();
@@ -853,13 +898,23 @@ where
 
     reg!("email/change_email", ".html");
     reg!("email/delete_account", ".html");
+    reg!("email/emergency_access_invite_accepted", ".html");
+    reg!("email/emergency_access_invite_confirmed", ".html");
+    reg!("email/emergency_access_recovery_approved", ".html");
+    reg!("email/emergency_access_recovery_initiated", ".html");
+    reg!("email/emergency_access_recovery_rejected", ".html");
+    reg!("email/emergency_access_recovery_reminder", ".html");
+    reg!("email/emergency_access_recovery_timed_out", ".html");
+    reg!("email/incomplete_2fa_login", ".html");
     reg!("email/invite_accepted", ".html");
     reg!("email/invite_confirmed", ".html");
     reg!("email/new_device_logged_in", ".html");
     reg!("email/pw_hint_none", ".html");
     reg!("email/pw_hint_some", ".html");
     reg!("email/send_2fa_removed_from_org", ".html");
+    reg!("email/send_single_org_removed_from_org", ".html");
     reg!("email/send_org_invite", ".html");
+    reg!("email/send_emergency_access_invite", ".html");
     reg!("email/twofactor_email", ".html");
     reg!("email/verify_email", ".html");
     reg!("email/welcome", ".html");
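Validating a user-supplied pattern up front, as `validate_config` does for `ICON_BLACKLIST_REGEX`, turns a would-be runtime failure into a clear startup error. A minimal standalone version of that check:

    use regex::Regex;

    fn check_pattern(pattern: &str) -> Result<(), String> {
        Regex::new(pattern)
            .map(|_| ())
            .map_err(|e| format!("`ICON_BLACKLIST_REGEX` is invalid: {:#?}", e))
    }

    fn main() {
        assert!(check_pattern(r"^(192\.168\.|example\.com$)").is_ok());
        assert!(check_pattern(r"(unclosed").is_err());
    }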
@@ -278,7 +278,6 @@ impl<'a, 'r> FromRequest<'a, 'r> for DbConn {
 // https://docs.rs/diesel_migrations/*/diesel_migrations/macro.embed_migrations.html
 #[cfg(sqlite)]
 mod sqlite_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/sqlite");
 
     pub fn run_migrations() -> Result<(), super::Error> {
@@ -315,7 +314,6 @@ mod sqlite_migrations {
 
 #[cfg(mysql)]
 mod mysql_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/mysql");
 
     pub fn run_migrations() -> Result<(), super::Error> {
@@ -336,7 +334,6 @@ mod mysql_migrations {
 
 #[cfg(postgresql)]
 mod postgresql_migrations {
-    #[allow(unused_imports)]
     embed_migrations!("migrations/postgresql");
 
     pub fn run_migrations() -> Result<(), super::Error> {
@@ -143,16 +143,6 @@ impl Attachment {
         }}
     }
 
-    pub fn find_by_ciphers(cipher_uuids: Vec<String>, conn: &DbConn) -> Vec<Self> {
-        db_run! { conn: {
-            attachments::table
-                .filter(attachments::cipher_uuid.eq_any(cipher_uuids))
-                .load::<AttachmentDb>(conn)
-                .expect("Error loading attachments")
-                .from_db()
-        }}
-    }
-
     pub fn size_by_user(user_uuid: &str, conn: &DbConn) -> i64 {
         db_run! { conn: {
             let result: Option<i64> = attachments::table
@@ -343,36 +343,39 @@ impl Cipher {
         db_run! {conn: {
             // Check whether this cipher is in any collections accessible to the
             // user. If so, retrieve the access flags for each collection.
-            let query = ciphers::table
+            let rows = ciphers::table
                 .filter(ciphers::uuid.eq(&self.uuid))
                 .inner_join(ciphers_collections::table.on(
                     ciphers::uuid.eq(ciphers_collections::cipher_uuid)))
                 .inner_join(users_collections::table.on(
                     ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
                         .and(users_collections::user_uuid.eq(user_uuid))))
-                .select((users_collections::read_only, users_collections::hide_passwords));
+                .select((users_collections::read_only, users_collections::hide_passwords))
+                .load::<(bool, bool)>(conn)
+                .expect("Error getting access restrictions");
 
-            // There's an edge case where a cipher can be in multiple collections
-            // with inconsistent access flags. For example, a cipher could be in
-            // one collection where the user has read-only access, but also in
-            // another collection where the user has read/write access. To handle
-            // this, we do a boolean OR of all values in each of the `read_only`
-            // and `hide_passwords` columns. This could ideally be done as part
-            // of the query, but Diesel doesn't support a max() or bool_or()
-            // function on booleans and this behavior isn't portable anyway.
-            if let Ok(vec) = query.load::<(bool, bool)>(conn) {
-                let mut read_only = false;
-                let mut hide_passwords = false;
-                for (ro, hp) in vec.iter() {
-                    read_only |= ro;
-                    hide_passwords |= hp;
+            if rows.is_empty() {
+                // This cipher isn't in any collections accessible to the user.
+                return None;
+            }
+
+            // A cipher can be in multiple collections with inconsistent access flags.
+            // For example, a cipher could be in one collection where the user has
+            // read-only access, but also in another collection where the user has
+            // read/write access. For a flag to be in effect for a cipher, upstream
+            // requires all collections the cipher is in to have that flag set.
+            // Therefore, we do a boolean AND of all values in each of the `read_only`
+            // and `hide_passwords` columns. This could ideally be done as part of the
+            // query, but Diesel doesn't support a min() or bool_and() function on
+            // booleans and this behavior isn't portable anyway.
+            let mut read_only = true;
+            let mut hide_passwords = true;
+            for (ro, hp) in rows.iter() {
+                read_only &= ro;
+                hide_passwords &= hp;
             }
 
             Some((read_only, hide_passwords))
-            } else {
-                // This cipher isn't in any collections accessible to the user.
-                None
-            }
         }}
     }
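The semantic change in this hunk is the fold direction: the old code OR-ed the per-collection flags (the most restrictive collection won), the new code AND-s them (a restriction applies only if every collection the cipher is in imposes it). A self-contained illustration:

    fn main() {
        // Per-collection (read_only, hide_passwords) flags for one cipher.
        let rows = [(true, true), (false, true)];

        let mut read_only = true;
        let mut hide_passwords = true;
        for (ro, hp) in rows.iter() {
            read_only &= ro;
            hide_passwords &= hp;
        }

        // One writable collection is now enough to grant write access,
        // where the old OR-fold would have yielded (true, true).
        assert_eq!((read_only, hide_passwords), (false, true));
    }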
@@ -17,8 +17,7 @@ db_object! {
         pub user_uuid: String,
 
         pub name: String,
-        // https://github.com/bitwarden/core/tree/master/src/Core/Enums
-        pub atype: i32,
+        pub atype: i32, // https://github.com/bitwarden/server/blob/master/src/Core/Enums/DeviceType.cs
         pub push_token: Option<String>,
 
         pub refresh_token: String,
282 src/db/models/emergency_access.rs Normal file
@@ -0,0 +1,282 @@
use chrono::{NaiveDateTime, Utc};
use serde_json::Value;

use super::User;

db_object! {
    #[derive(Debug, Identifiable, Queryable, Insertable, Associations, AsChangeset)]
    #[table_name = "emergency_access"]
    #[changeset_options(treat_none_as_null="true")]
    #[belongs_to(User, foreign_key = "grantor_uuid")]
    #[primary_key(uuid)]
    pub struct EmergencyAccess {
        pub uuid: String,
        pub grantor_uuid: String,
        pub grantee_uuid: Option<String>,
        pub email: Option<String>,
        pub key_encrypted: Option<String>,
        pub atype: i32, //EmergencyAccessType
        pub status: i32, //EmergencyAccessStatus
        pub wait_time_days: i32,
        pub recovery_initiated_at: Option<NaiveDateTime>,
        pub last_notification_at: Option<NaiveDateTime>,
        pub updated_at: NaiveDateTime,
        pub created_at: NaiveDateTime,
    }
}

/// Local methods

impl EmergencyAccess {
    pub fn new(grantor_uuid: String, email: Option<String>, status: i32, atype: i32, wait_time_days: i32) -> Self {
        let now = Utc::now().naive_utc();

        Self {
            uuid: crate::util::get_uuid(),
            grantor_uuid,
            grantee_uuid: None,
            email,
            status,
            atype,
            wait_time_days,
            recovery_initiated_at: None,
            created_at: now,
            updated_at: now,
            key_encrypted: None,
            last_notification_at: None,
        }
    }

    pub fn get_type_as_str(&self) -> &'static str {
        if self.atype == EmergencyAccessType::View as i32 {
            "View"
        } else {
            "Takeover"
        }
    }

    pub fn has_type(&self, access_type: EmergencyAccessType) -> bool {
        self.atype == access_type as i32
    }

    pub fn has_status(&self, status: EmergencyAccessStatus) -> bool {
        self.status == status as i32
    }

    pub fn to_json(&self) -> Value {
        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "Object": "emergencyAccess",
        })
    }

    pub fn to_json_grantor_details(&self, conn: &DbConn) -> Value {
        let grantor_user = User::find_by_uuid(&self.grantor_uuid, conn).expect("Grantor user not found.");

        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "GrantorId": grantor_user.uuid,
            "Email": grantor_user.email,
            "Name": grantor_user.name,
            "Object": "emergencyAccessGrantorDetails",
        })
    }

    #[allow(clippy::manual_map)]
    pub fn to_json_grantee_details(&self, conn: &DbConn) -> Value {
        let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
            Some(User::find_by_uuid(grantee_uuid, conn).expect("Grantee user not found."))
        } else if let Some(email) = self.email.as_deref() {
            Some(User::find_by_mail(email, conn).expect("Grantee user not found."))
        } else {
            None
        };

        json!({
            "Id": self.uuid,
            "Status": self.status,
            "Type": self.atype,
            "WaitTimeDays": self.wait_time_days,
            "GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
            "Email": grantee_user.as_ref().map_or("", |u| &u.email),
            "Name": grantee_user.as_ref().map_or("", |u| &u.name),
            "Object": "emergencyAccessGranteeDetails",
        })
    }
}

#[derive(Copy, Clone, PartialEq, Eq, num_derive::FromPrimitive)]
pub enum EmergencyAccessType {
    View = 0,
    Takeover = 1,
}

impl EmergencyAccessType {
    pub fn from_str(s: &str) -> Option<Self> {
        match s {
            "0" | "View" => Some(EmergencyAccessType::View),
            "1" | "Takeover" => Some(EmergencyAccessType::Takeover),
            _ => None,
        }
    }
}

impl PartialEq<i32> for EmergencyAccessType {
    fn eq(&self, other: &i32) -> bool {
        *other == *self as i32
    }
}

impl PartialEq<EmergencyAccessType> for i32 {
    fn eq(&self, other: &EmergencyAccessType) -> bool {
        *self == *other as i32
    }
}

pub enum EmergencyAccessStatus {
    Invited = 0,
    Accepted = 1,
    Confirmed = 2,
    RecoveryInitiated = 3,
    RecoveryApproved = 4,
}

// region Database methods

use crate::db::DbConn;

use crate::api::EmptyResult;
use crate::error::MapResult;

impl EmergencyAccess {
    pub fn save(&mut self, conn: &DbConn) -> EmptyResult {
        User::update_uuid_revision(&self.grantor_uuid, conn);
        self.updated_at = Utc::now().naive_utc();

        db_run! { conn:
            sqlite, mysql {
                match diesel::replace_into(emergency_access::table)
                    .values(EmergencyAccessDb::to_db(self))
                    .execute(conn)
                {
                    Ok(_) => Ok(()),
                    // Record already exists and causes a Foreign Key Violation because replace_into() wants to delete the record first.
                    Err(diesel::result::Error::DatabaseError(diesel::result::DatabaseErrorKind::ForeignKeyViolation, _)) => {
                        diesel::update(emergency_access::table)
                            .filter(emergency_access::uuid.eq(&self.uuid))
                            .set(EmergencyAccessDb::to_db(self))
                            .execute(conn)
                            .map_res("Error updating emergency access")
                    }
                    Err(e) => Err(e.into()),
                }.map_res("Error saving emergency access")
            }
            postgresql {
                let value = EmergencyAccessDb::to_db(self);
                diesel::insert_into(emergency_access::table)
                    .values(&value)
                    .on_conflict(emergency_access::uuid)
                    .do_update()
                    .set(&value)
                    .execute(conn)
                    .map_res("Error saving emergency access")
            }
        }
    }

    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
        for ea in Self::find_all_by_grantor_uuid(user_uuid, conn) {
            ea.delete(conn)?;
        }
        for ea in Self::find_all_by_grantee_uuid(user_uuid, conn) {
            ea.delete(conn)?;
        }
        Ok(())
    }

    pub fn delete(self, conn: &DbConn) -> EmptyResult {
        User::update_uuid_revision(&self.grantor_uuid, conn);

        db_run! { conn: {
            diesel::delete(emergency_access::table.filter(emergency_access::uuid.eq(self.uuid)))
                .execute(conn)
                .map_res("Error removing user from emergency access")
        }}
    }

    pub fn find_by_uuid(uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::uuid.eq(uuid))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_by_grantor_uuid_and_grantee_uuid_or_email(
        grantor_uuid: &str,
        grantee_uuid: &str,
        email: &str,
        conn: &DbConn,
    ) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .filter(emergency_access::grantee_uuid.eq(grantee_uuid).or(emergency_access::email.eq(email)))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_recoveries(conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::status.eq(EmergencyAccessStatus::RecoveryInitiated as i32))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }

    pub fn find_by_uuid_and_grantor_uuid(uuid: &str, grantor_uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::uuid.eq(uuid))
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_by_grantee_uuid(grantee_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantee_uuid.eq(grantee_uuid))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }

    pub fn find_invited_by_grantee_email(grantee_email: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::email.eq(grantee_email))
                .filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
                .first::<EmergencyAccessDb>(conn)
                .ok().from_db()
        }}
    }

    pub fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &DbConn) -> Vec<Self> {
        db_run! { conn: {
            emergency_access::table
                .filter(emergency_access::grantor_uuid.eq(grantor_uuid))
                .load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
        }}
    }
}

// endregion
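A hedged sketch of the check the emergency request timeout job presumably performs against these fields: a recovery request becomes grantable once `wait_time_days` has elapsed since `recovery_initiated_at` (the exact job body is not part of this section):

    use chrono::{Duration, NaiveDateTime, Utc};

    fn recovery_ready(recovery_initiated_at: NaiveDateTime, wait_time_days: i32) -> bool {
        recovery_initiated_at + Duration::days(wait_time_days as i64) <= Utc::now().naive_utc()
    }

    fn main() {
        let initiated = Utc::now().naive_utc() - Duration::days(8);
        assert!(recovery_ready(initiated, 7));   // 7-day wait has passed
        assert!(!recovery_ready(initiated, 30)); // 30-day wait has not
    }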
@@ -2,22 +2,26 @@ mod attachment;
 mod cipher;
 mod collection;
 mod device;
+mod emergency_access;
 mod favorite;
 mod folder;
 mod org_policy;
 mod organization;
 mod send;
 mod two_factor;
+mod two_factor_incomplete;
 mod user;
 
 pub use self::attachment::Attachment;
 pub use self::cipher::Cipher;
 pub use self::collection::{Collection, CollectionCipher, CollectionUser};
 pub use self::device::Device;
+pub use self::emergency_access::{EmergencyAccess, EmergencyAccessStatus, EmergencyAccessType};
 pub use self::favorite::Favorite;
 pub use self::folder::{Folder, FolderCipher};
 pub use self::org_policy::{OrgPolicy, OrgPolicyType};
 pub use self::organization::{Organization, UserOrgStatus, UserOrgType, UserOrganization};
 pub use self::send::{Send, SendType};
 pub use self::two_factor::{TwoFactor, TwoFactorType};
+pub use self::two_factor_incomplete::TwoFactorIncomplete;
 pub use self::user::{Invitation, User, UserStampException};
@@ -27,7 +27,7 @@ pub enum OrgPolicyType {
     TwoFactorAuthentication = 0,
     MasterPassword = 1,
     PasswordGenerator = 2,
-    // SingleOrg = 3, // Not currently supported.
+    SingleOrg = 3,
     // RequireSso = 4, // Not currently supported.
     PersonalOwnership = 5,
     DisableSend = 6,
@@ -143,7 +143,7 @@ impl OrgPolicy {
         }}
     }
 
-    pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+    pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
         db_run! { conn: {
             org_policies::table
                 .inner_join(
@@ -184,8 +184,8 @@ impl OrgPolicy {
     /// and the user is not an owner or admin of that org. This is only useful for checking
     /// applicability of policy types that have these particular semantics.
     pub fn is_applicable_to_user(user_uuid: &str, policy_type: OrgPolicyType, conn: &DbConn) -> bool {
-        // Returns confirmed users only.
-        for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+        // TODO: Should check confirmed and accepted users
+        for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
             if policy.enabled && policy.has_type(policy_type) {
                 let org_uuid = &policy.org_uuid;
                 if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
@@ -201,8 +201,7 @@ impl OrgPolicy {
     /// Returns true if the user belongs to an org that has enabled the `DisableHideEmail`
     /// option of the `Send Options` policy, and the user is not an owner or admin of that org.
     pub fn is_hide_email_disabled(user_uuid: &str, conn: &DbConn) -> bool {
-        // Returns confirmed users only.
-        for policy in OrgPolicy::find_by_user(user_uuid, conn) {
+        for policy in OrgPolicy::find_confirmed_by_user(user_uuid, conn) {
             if policy.enabled && policy.has_type(OrgPolicyType::SendOptions) {
                 let org_uuid = &policy.org_uuid;
                 if let Some(user) = UserOrganization::find_by_user_and_org(user_uuid, org_uuid, conn) {
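The `SingleOrg` variant now occupies slot 3, matching upstream's integer numbering, which is why the policy comparisons work directly on the stored `i32`. A toy reduction of the `has_type` pattern:

    #[allow(dead_code)]
    #[derive(Copy, Clone, PartialEq)]
    enum OrgPolicyType {
        TwoFactorAuthentication = 0,
        MasterPassword = 1,
        PasswordGenerator = 2,
        SingleOrg = 3,
        // RequireSso = 4 is still unsupported, so that slot stays vacant.
        PersonalOwnership = 5,
        DisableSend = 6,
    }

    fn has_type(stored_atype: i32, policy_type: OrgPolicyType) -> bool {
        stored_atype == policy_type as i32
    }

    fn main() {
        assert!(has_type(3, OrgPolicyType::SingleOrg));
        assert!(!has_type(3, OrgPolicyType::DisableSend));
    }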
@@ -290,6 +290,8 @@ impl UserOrganization {
             // For now they still have that code also in the web-vault, but they will remove it at some point.
             // https://github.com/bitwarden/server/tree/master/bitwarden_license/src/
             "UseBusinessPortal": false, // Disable BusinessPortal Button
+            "ProviderId": null,
+            "ProviderName": null,
 
             // TODO: Add support for Custom User Roles
             // See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
@@ -475,7 +477,7 @@ impl UserOrganization {
         }}
     }
 
-    pub fn find_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
+    pub fn find_confirmed_by_user(user_uuid: &str, conn: &DbConn) -> Vec<Self> {
         db_run! { conn: {
             users_organizations::table
                 .filter(users_organizations::user_uuid.eq(user_uuid))
src/db/models/send.rs

@@ -232,15 +232,18 @@ impl Send {
         }
     }
 
-    pub fn update_users_revision(&self, conn: &DbConn) {
+    pub fn update_users_revision(&self, conn: &DbConn) -> Vec<String> {
+        let mut user_uuids = Vec::new();
         match &self.user_uuid {
             Some(user_uuid) => {
                 User::update_uuid_revision(user_uuid, conn);
+                user_uuids.push(user_uuid.clone())
             }
             None => {
                 // Belongs to Organization, not implemented
             }
-        }
+        };
+        user_uuids
     }
 
     pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
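The signature change is the substance of this hunk: update_users_revision now returns the UUIDs of the users it touched, so a caller can do per-user follow-up work (such as queueing a sync notification) without re-deriving who was affected. A stand-alone sketch of that pattern; the types below are stand-ins, not vaultwarden's API:

// Stand-in types; illustrates only the "update, then return affected IDs" pattern.
struct Send {
    user_uuid: Option<String>,
}

fn update_revision(user_uuid: &str) {
    println!("bumped revision for {user_uuid}");
}

impl Send {
    fn update_users_revision(&self) -> Vec<String> {
        let mut user_uuids = Vec::new();
        match &self.user_uuid {
            Some(user_uuid) => {
                update_revision(user_uuid);
                user_uuids.push(user_uuid.clone());
            }
            None => {
                // Owned by an organization: not implemented, nothing collected.
            }
        };
        user_uuids
    }
}

fn main() {
    let send = Send { user_uuid: Some("user-1".into()) };
    for uuid in send.update_users_revision() {
        println!("notify {uuid}"); // e.g. queue a per-user sync notification
    }
}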
src/db/models/two_factor.rs

@@ -1,8 +1,6 @@
 use serde_json::Value;
 
-use crate::api::EmptyResult;
-use crate::db::DbConn;
-use crate::error::MapResult;
+use crate::{api::EmptyResult, db::DbConn, error::MapResult};
 
 use super::User;
 

@@ -161,7 +159,6 @@ impl TwoFactor {
 
     use crate::api::core::two_factor::u2f::U2FRegistration;
     use crate::api::core::two_factor::webauthn::{get_webauthn_registrations, WebauthnRegistration};
-    use std::convert::TryInto;
     use webauthn_rs::proto::*;
 
     for mut u2f in u2f_factors {
src/db/models/two_factor_incomplete.rs (new file, 108 lines)
@@ -0,0 +1,108 @@
use chrono::{NaiveDateTime, Utc};

use crate::{api::EmptyResult, auth::ClientIp, db::DbConn, error::MapResult, CONFIG};

use super::User;

db_object! {
    #[derive(Identifiable, Queryable, Insertable, Associations, AsChangeset)]
    #[table_name = "twofactor_incomplete"]
    #[belongs_to(User, foreign_key = "user_uuid")]
    #[primary_key(user_uuid, device_uuid)]
    pub struct TwoFactorIncomplete {
        pub user_uuid: String,
        // This device UUID is simply what's claimed by the device. It doesn't
        // necessarily correspond to any UUID in the devices table, since a device
        // must complete 2FA login before being added into the devices table.
        pub device_uuid: String,
        pub device_name: String,
        pub login_time: NaiveDateTime,
        pub ip_address: String,
    }
}

impl TwoFactorIncomplete {
    pub fn mark_incomplete(
        user_uuid: &str,
        device_uuid: &str,
        device_name: &str,
        ip: &ClientIp,
        conn: &DbConn,
    ) -> EmptyResult {
        if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
            return Ok(());
        }

        // Don't update the data for an existing user/device pair, since that
        // would allow an attacker to arbitrarily delay notifications by
        // sending repeated 2FA attempts to reset the timer.
        let existing = Self::find_by_user_and_device(user_uuid, device_uuid, conn);
        if existing.is_some() {
            return Ok(());
        }

        db_run! { conn: {
            diesel::insert_into(twofactor_incomplete::table)
                .values((
                    twofactor_incomplete::user_uuid.eq(user_uuid),
                    twofactor_incomplete::device_uuid.eq(device_uuid),
                    twofactor_incomplete::device_name.eq(device_name),
                    twofactor_incomplete::login_time.eq(Utc::now().naive_utc()),
                    twofactor_incomplete::ip_address.eq(ip.ip.to_string()),
                ))
                .execute(conn)
                .map_res("Error adding twofactor_incomplete record")
        }}
    }

    pub fn mark_complete(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
        if CONFIG.incomplete_2fa_time_limit() <= 0 || !CONFIG.mail_enabled() {
            return Ok(());
        }

        Self::delete_by_user_and_device(user_uuid, device_uuid, conn)
    }

    pub fn find_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> Option<Self> {
        db_run! { conn: {
            twofactor_incomplete::table
                .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
                .filter(twofactor_incomplete::device_uuid.eq(device_uuid))
                .first::<TwoFactorIncompleteDb>(conn)
                .ok()
                .from_db()
        }}
    }

    pub fn find_logins_before(dt: &NaiveDateTime, conn: &DbConn) -> Vec<Self> {
        db_run! {conn: {
            twofactor_incomplete::table
                .filter(twofactor_incomplete::login_time.lt(dt))
                .load::<TwoFactorIncompleteDb>(conn)
                .expect("Error loading twofactor_incomplete")
                .from_db()
        }}
    }

    pub fn delete(self, conn: &DbConn) -> EmptyResult {
        Self::delete_by_user_and_device(&self.user_uuid, &self.device_uuid, conn)
    }

    pub fn delete_by_user_and_device(user_uuid: &str, device_uuid: &str, conn: &DbConn) -> EmptyResult {
        db_run! { conn: {
            diesel::delete(twofactor_incomplete::table
                .filter(twofactor_incomplete::user_uuid.eq(user_uuid))
                .filter(twofactor_incomplete::device_uuid.eq(device_uuid)))
                .execute(conn)
                .map_res("Error in twofactor_incomplete::delete_by_user_and_device()")
        }}
    }

    pub fn delete_all_by_user(user_uuid: &str, conn: &DbConn) -> EmptyResult {
        db_run! { conn: {
            diesel::delete(twofactor_incomplete::table.filter(twofactor_incomplete::user_uuid.eq(user_uuid)))
                .execute(conn)
                .map_res("Error in twofactor_incomplete::delete_all_by_user()")
        }}
    }
}
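The interesting design decision in mark_incomplete() above is the early return when a record already exists: the notification timer is anchored to the first password-only login for a user/device pair, so an attacker who keeps replaying the first factor cannot keep pushing the alert into the future. A self-contained Rust sketch of just that property, using a plain HashMap in place of the Diesel-backed table (all names below are illustrative, not vaultwarden's API):

use std::collections::HashMap;
use std::time::Instant;

// Keyed the same way as the twofactor_incomplete table: (user_uuid, device_uuid).
struct IncompleteLogins {
    logins: HashMap<(String, String), Instant>,
}

impl IncompleteLogins {
    fn new() -> Self {
        Self { logins: HashMap::new() }
    }

    // First password-only login wins; later attempts must NOT reset the timer,
    // otherwise repeated 2FA attempts could delay the notification indefinitely.
    fn mark_incomplete(&mut self, user: &str, device: &str) {
        self.logins
            .entry((user.to_string(), device.to_string()))
            .or_insert_with(Instant::now);
    }

    // Called once the second factor succeeds.
    fn mark_complete(&mut self, user: &str, device: &str) {
        self.logins.remove(&(user.to_string(), device.to_string()));
    }

    // Everything older than `limit` is due for an email notification.
    fn due_for_notification(&self, limit: std::time::Duration) -> Vec<&(String, String)> {
        self.logins.iter().filter(|(_, t)| t.elapsed() >= limit).map(|(k, _)| k).collect()
    }
}

fn main() {
    let mut store = IncompleteLogins::new();
    store.mark_incomplete("user-1", "device-a");
    store.mark_incomplete("user-1", "device-a"); // no-op: timer not reset
    store.mark_complete("user-1", "device-a");   // 2FA finished, record dropped
    assert!(store.due_for_notification(std::time::Duration::from_secs(0)).is_empty());
}

The real implementation gets the same effect from the existing-row check before the INSERT, and find_logins_before() plays the role of due_for_notification() when the scheduled job scans for overdue entries.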
src/db/models/user.rs

@@ -73,9 +73,9 @@ impl User {
     pub const CLIENT_KDF_TYPE_DEFAULT: i32 = 0; // PBKDF2: 0
     pub const CLIENT_KDF_ITER_DEFAULT: i32 = 100_000;
 
-    pub fn new(mail: String) -> Self {
+    pub fn new(email: String) -> Self {
         let now = Utc::now().naive_utc();
-        let email = mail.to_lowercase();
+        let email = email.to_lowercase();
 
         Self {
             uuid: crate::util::get_uuid(),

@@ -176,7 +176,10 @@ impl User {
     }
 }
 
-use super::{Cipher, Device, Favorite, Folder, Send, TwoFactor, UserOrgType, UserOrganization};
+use super::{
+    Cipher, Device, EmergencyAccess, Favorite, Folder, Send, TwoFactor, TwoFactorIncomplete, UserOrgType,
+    UserOrganization,
+};
 use crate::db::DbConn;
 
 use crate::api::EmptyResult;

@@ -185,7 +188,7 @@ use crate::error::MapResult;
 /// Database methods
 impl User {
     pub fn to_json(&self, conn: &DbConn) -> Value {
-        let orgs = UserOrganization::find_by_user(&self.uuid, conn);
+        let orgs = UserOrganization::find_confirmed_by_user(&self.uuid, conn);
         let orgs_json: Vec<Value> = orgs.iter().map(|c| c.to_json(conn)).collect();
         let twofactor_enabled = !TwoFactor::find_by_user(&self.uuid, conn).is_empty();
 

@@ -210,7 +213,10 @@ impl User {
             "PrivateKey": self.private_key,
             "SecurityStamp": self.security_stamp,
             "Organizations": orgs_json,
-            "Object": "profile"
+            "Providers": [],
+            "ProviderOrganizations": [],
+            "ForcePasswordReset": false,
+            "Object": "profile",
         })
     }
 

@@ -253,7 +259,7 @@ impl User {
     }
 
     pub fn delete(self, conn: &DbConn) -> EmptyResult {
-        for user_org in UserOrganization::find_by_user(&self.uuid, conn) {
+        for user_org in UserOrganization::find_confirmed_by_user(&self.uuid, conn) {
            if user_org.atype == UserOrgType::Owner {
                let owner_type = UserOrgType::Owner as i32;
                if UserOrganization::find_by_org_and_type(&user_org.org_uuid, owner_type, conn).len() <= 1 {

@@ -263,12 +269,14 @@ impl User {
         }
 
         Send::delete_all_by_user(&self.uuid, conn)?;
+        EmergencyAccess::delete_all_by_user(&self.uuid, conn)?;
         UserOrganization::delete_all_by_user(&self.uuid, conn)?;
         Cipher::delete_all_by_user(&self.uuid, conn)?;
         Favorite::delete_all_by_user(&self.uuid, conn)?;
         Folder::delete_all_by_user(&self.uuid, conn)?;
         Device::delete_all_by_user(&self.uuid, conn)?;
         TwoFactor::delete_all_by_user(&self.uuid, conn)?;
+        TwoFactorIncomplete::delete_all_by_user(&self.uuid, conn)?;
         Invitation::take(&self.email, conn); // Delete invitation if any
 
         db_run! {conn: {

@@ -346,7 +354,8 @@ impl User {
     }
 
 impl Invitation {
-    pub const fn new(email: String) -> Self {
+    pub fn new(email: String) -> Self {
+        let email = email.to_lowercase();
         Self {
             email,
         }
src/db/schemas/*/schema.rs (this diff appears three times in the compare view, once per database backend; shown once here)

@@ -140,6 +140,16 @@ table! {
     }
 }
 
+table! {
+    twofactor_incomplete (user_uuid, device_uuid) {
+        user_uuid -> Text,
+        device_uuid -> Text,
+        device_name -> Text,
+        login_time -> Timestamp,
+        ip_address -> Text,
+    }
+}
+
 table! {
     users (uuid) {
         uuid -> Text,

@@ -192,6 +202,23 @@ table! {
     }
 }
 
+table! {
+    emergency_access (uuid) {
+        uuid -> Text,
+        grantor_uuid -> Text,
+        grantee_uuid -> Nullable<Text>,
+        email -> Nullable<Text>,
+        key_encrypted -> Nullable<Text>,
+        atype -> Integer,
+        status -> Integer,
+        wait_time_days -> Integer,
+        recovery_initiated_at -> Nullable<Timestamp>,
+        last_notification_at -> Nullable<Timestamp>,
+        updated_at -> Timestamp,
+        created_at -> Timestamp,
+    }
+}
+
 joinable!(attachments -> ciphers (cipher_uuid));
 joinable!(ciphers -> organizations (organization_uuid));
 joinable!(ciphers -> users (user_uuid));

@@ -210,6 +237,7 @@ joinable!(users_collections -> collections (collection_uuid));
 joinable!(users_collections -> users (user_uuid));
 joinable!(users_organizations -> organizations (org_uuid));
 joinable!(users_organizations -> users (user_uuid));
+joinable!(emergency_access -> users (grantor_uuid));
 
 allow_tables_to_appear_in_same_query!(
     attachments,

@@ -227,4 +255,5 @@ allow_tables_to_appear_in_same_query!(
     users,
     users_collections,
     users_organizations,
+    emergency_access,
 );
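For readers who think in SQL rather than Diesel: the two table! declarations above imply roughly the following DDL. This is a hand-written sketch derived only from the column lists (not the project's actual migration files, which may differ in constraints and references); it runs against an in-memory database with the rusqlite crate:

// Sketch only: vaultwarden itself manages these tables through Diesel migrations.
use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open_in_memory()?;
    conn.execute_batch(
        "CREATE TABLE twofactor_incomplete (
             user_uuid   TEXT     NOT NULL,
             device_uuid TEXT     NOT NULL,
             device_name TEXT     NOT NULL,
             login_time  DATETIME NOT NULL,
             ip_address  TEXT     NOT NULL,
             PRIMARY KEY (user_uuid, device_uuid)
         );
         CREATE TABLE emergency_access (
             uuid                  TEXT     NOT NULL PRIMARY KEY,
             grantor_uuid          TEXT     NOT NULL,
             grantee_uuid          TEXT,
             email                 TEXT,
             key_encrypted         TEXT,
             atype                 INTEGER  NOT NULL,
             status                INTEGER  NOT NULL,
             wait_time_days        INTEGER  NOT NULL,
             recovery_initiated_at DATETIME,
             last_notification_at  DATETIME,
             updated_at            DATETIME NOT NULL,
             created_at            DATETIME NOT NULL
         );",
    )?;
    println!("schema created");
    Ok(())
}

Nullable<T> columns map to nullable SQL columns; everything else is NOT NULL.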
src/error.rs (11 lines changed)

@@ -73,7 +73,7 @@ make_error! {
     Serde(SerdeErr): _has_source, _api_error,
     JWt(JwtErr): _has_source, _api_error,
     Handlebars(HbErr): _has_source, _api_error,
-    //WsError(ws::Error): _has_source, _api_error,
     Io(IoErr): _has_source, _api_error,
     Time(TimeErr): _has_source, _api_error,
     Req(ReqErr): _has_source, _api_error,

@@ -220,6 +220,15 @@ macro_rules! err {
     }};
 }
 
+macro_rules! err_silent {
+    ($msg:expr) => {{
+        return Err(crate::error::Error::new($msg, $msg));
+    }};
+    ($usr_msg:expr, $log_value:expr) => {{
+        return Err(crate::error::Error::new($usr_msg, $log_value));
+    }};
+}
+
 #[macro_export]
 macro_rules! err_code {
     ($msg:expr, $err_code: expr) => {{
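err_silent! is a twin of the existing err! macro minus its side effects: it builds the same two-part error (user-facing message plus log value) and returns early, but emits nothing to the log, which suits expected failures that would otherwise flood it. A compile-ready sketch of the pair with a simplified stand-in Error type (vaultwarden's real macros construct crate::error::Error and log through the log crate):

// Simplified stand-in for crate::error::Error.
#[derive(Debug)]
struct Error {
    user_msg: String,
    log_msg: String,
}

impl Error {
    fn new(user_msg: impl Into<String>, log_msg: impl Into<String>) -> Self {
        Self { user_msg: user_msg.into(), log_msg: log_msg.into() }
    }
}

// Logs, then returns early with an Err.
macro_rules! err {
    ($msg:expr) => {{
        eprintln!("ERROR: {}", $msg); // stand-in for the log crate's error!()
        return Err(Error::new($msg, $msg));
    }};
}

// Same early return, but without emitting a log line. Useful for expected
// failures (e.g. probing logins) that would otherwise spam the log.
macro_rules! err_silent {
    ($msg:expr) => {{
        return Err(Error::new($msg, $msg));
    }};
}

fn noisy() -> Result<(), Error> {
    err!("Username or password is incorrect.")
}

fn quiet() -> Result<(), Error> {
    err_silent!("Username or password is incorrect.")
}

fn main() {
    let _ = noisy();           // prints an ERROR line, returns Err
    println!("{:?}", quiet()); // same Err value, but nothing was logged
}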
src/mail.rs (193 lines changed)

@@ -1,6 +1,6 @@
 use std::str::FromStr;
 
-use chrono::{DateTime, Local};
+use chrono::NaiveDateTime;
 use percent_encoding::{percent_encode, NON_ALPHANUMERIC};
 
 use lettre::{

@@ -13,7 +13,10 @@ use lettre::{
 
 use crate::{
     api::EmptyResult,
-    auth::{encode_jwt, generate_delete_claims, generate_invite_claims, generate_verify_email_claims},
+    auth::{
+        encode_jwt, generate_delete_claims, generate_emergency_access_invite_claims, generate_invite_claims,
+        generate_verify_email_claims,
+    },
     error::Error,
     CONFIG,
 };

@@ -192,6 +195,18 @@ pub fn send_2fa_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text)
 }
 
+pub fn send_single_org_removed_from_org(address: &str, org_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/send_single_org_removed_from_org",
+        json!({
+            "url": CONFIG.domain(),
+            "org_name": org_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
 pub fn send_invite(
     address: &str,
     uuid: &str,

@@ -224,6 +239,136 @@ pub fn send_invite(
     send_email(address, &subject, body_html, body_text)
 }
 
+pub fn send_emergency_access_invite(
+    address: &str,
+    uuid: &str,
+    emer_id: Option<String>,
+    grantor_name: Option<String>,
+    grantor_email: Option<String>,
+) -> EmptyResult {
+    let claims = generate_emergency_access_invite_claims(
+        uuid.to_string(),
+        String::from(address),
+        emer_id.clone(),
+        grantor_name.clone(),
+        grantor_email,
+    );
+
+    let invite_token = encode_jwt(&claims);
+
+    let (subject, body_html, body_text) = get_text(
+        "email/send_emergency_access_invite",
+        json!({
+            "url": CONFIG.domain(),
+            "emer_id": emer_id.unwrap_or_else(|| "_".to_string()),
+            "email": percent_encode(address.as_bytes(), NON_ALPHANUMERIC).to_string(),
+            "grantor_name": grantor_name,
+            "token": invite_token,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_invite_accepted(address: &str, grantee_email: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_accepted",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_email": grantee_email,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_invite_confirmed(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_invite_confirmed",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_approved(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_approved",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_initiated(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    wait_time_days: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_initiated",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "wait_time_days": wait_time_days,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_reminder(
+    address: &str,
+    grantee_name: &str,
+    atype: &str,
+    days_left: &str,
+) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_reminder",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+            "days_left": days_left,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_rejected(address: &str, grantor_name: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_rejected",
+        json!({
+            "url": CONFIG.domain(),
+            "grantor_name": grantor_name,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_emergency_access_recovery_timed_out(address: &str, grantee_name: &str, atype: &str) -> EmptyResult {
+    let (subject, body_html, body_text) = get_text(
+        "email/emergency_access_recovery_timed_out",
+        json!({
+            "url": CONFIG.domain(),
+            "grantee_name": grantee_name,
+            "atype": atype,
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
 pub fn send_invite_accepted(new_user_email: &str, address: &str, org_name: &str) -> EmptyResult {
     let (subject, body_html, body_text) = get_text(
         "email/invite_accepted",

@@ -249,7 +394,7 @@ pub fn send_invite_confirmed(address: &str, org_name: &str) -> EmptyResult {
     send_email(address, &subject, body_html, body_text)
 }
 
-pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>, device: &str) -> EmptyResult {
+pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
     use crate::util::upcase_first;
     let device = upcase_first(device);
 

@@ -260,7 +405,26 @@ pub fn send_new_device_logged_in(address: &str, ip: &str, dt: &DateTime<Local>,
             "url": CONFIG.domain(),
             "ip": ip,
             "device": device,
-            "datetime": crate::util::format_datetime_local(dt, fmt),
+            "datetime": crate::util::format_naive_datetime_local(dt, fmt),
+        }),
+    )?;
+
+    send_email(address, &subject, body_html, body_text)
+}
+
+pub fn send_incomplete_2fa_login(address: &str, ip: &str, dt: &NaiveDateTime, device: &str) -> EmptyResult {
+    use crate::util::upcase_first;
+    let device = upcase_first(device);
+
+    let fmt = "%A, %B %_d, %Y at %r %Z";
+    let (subject, body_html, body_text) = get_text(
+        "email/incomplete_2fa_login",
+        json!({
+            "url": CONFIG.domain(),
+            "ip": ip,
+            "device": device,
+            "datetime": crate::util::format_naive_datetime_local(dt, fmt),
+            "time_limit": CONFIG.incomplete_2fa_time_limit(),
         }),
     )?;
 

@@ -340,15 +504,28 @@ fn send_email(address: &str, subject: &str, body_html: String, body_text: String
         // Match some common errors and make them more user friendly
         Err(e) => {
             if e.is_client() {
+                debug!("SMTP Client error: {:#?}", e);
                 err!(format!("SMTP Client error: {}", e));
             } else if e.is_transient() {
-                err!(format!("SMTP 4xx error: {:?}", e));
+                debug!("SMTP 4xx error: {:#?}", e);
+                err!(format!("SMTP 4xx error: {}", e));
             } else if e.is_permanent() {
-                err!(format!("SMTP 5xx error: {:?}", e));
+                debug!("SMTP 5xx error: {:#?}", e);
+                let mut msg = e.to_string();
+                // Add a special check for 535 to add a more descriptive message
+                if msg.contains("(535)") {
+                    msg = format!("{} - Authentication credentials invalid", msg);
+                }
+                err!(format!("SMTP 5xx error: {}", msg));
             } else if e.is_timeout() {
-                err!(format!("SMTP timeout error: {:?}", e));
+                debug!("SMTP timeout error: {:#?}", e);
+                err!(format!("SMTP timeout error: {}", e));
+            } else if e.is_tls() {
+                debug!("SMTP Encryption error: {:#?}", e);
+                err!(format!("SMTP Encryption error: {}", e));
             } else {
-                Err(e.into())
+                debug!("SMTP {:#?}", e);
+                err!(format!("SMTP {}", e));
             }
         }
     }
 }
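The 535 special case above is worth calling out: SMTP reply code 535 signals an authentication failure, so the handler appends a hint instead of surfacing the raw protocol text alone. The string manipulation in isolation (plain strings, no lettre types involved):

// Sketch of the 535-specific message enrichment above.
fn friendly_smtp_error(raw: &str) -> String {
    let mut msg = raw.to_string();
    // SMTP 535 = authentication failed; point the admin at the credentials.
    if msg.contains("(535)") {
        msg = format!("{} - Authentication credentials invalid", msg);
    }
    format!("SMTP 5xx error: {}", msg)
}

fn main() {
    println!("{}", friendly_smtp_error("permanent error (535): 5.7.8 Username and Password not accepted"));
}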
src/main.rs (45 lines changed)

@@ -1,6 +1,10 @@
 #![forbid(unsafe_code)]
 #![cfg_attr(feature = "unstable", feature(ip))]
-#![recursion_limit = "512"]
+// The recursion_limit is mainly triggered by the json!() macro.
+// The more key/value pairs there are the more recursion occurs.
+// We want to keep this as low as possible, but not higher then 128.
+// If you go above 128 it will cause rust-analyzer to fail,
+#![recursion_limit = "87"]
 
 extern crate openssl;
 #[macro_use]

@@ -104,6 +108,14 @@ fn launch_info() {
 }
 
 fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
+    // Depending on the main log level we either want to disable or enable logging for trust-dns.
+    // Else if there are timeouts it will clutter the logs since trust-dns uses warn for this.
+    let trust_dns_level = if level >= log::LevelFilter::Debug {
+        level
+    } else {
+        log::LevelFilter::Off
+    };
+
     let mut logger = fern::Dispatch::new()
         .level(level)
         // Hide unknown certificate errors if using self-signed

@@ -122,6 +134,8 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
         .level_for("hyper::client", log::LevelFilter::Off)
         // Prevent cookie_store logs
         .level_for("cookie_store", log::LevelFilter::Off)
+        // Variable level for trust-dns used by reqwest
+        .level_for("trust_dns_proto", trust_dns_level)
         .chain(std::io::stdout());
 
     // Enable smtp debug logging only specifically for smtp when need.

@@ -345,11 +359,40 @@ fn schedule_jobs(pool: db::DbPool) {
         }));
     }
 
+    // Send email notifications about incomplete 2FA logins, which potentially
+    // indicates that a user's master password has been compromised.
+    if !CONFIG.incomplete_2fa_schedule().is_empty() {
+        sched.add(Job::new(CONFIG.incomplete_2fa_schedule().parse().unwrap(), || {
+            api::send_incomplete_2fa_notifications(pool.clone());
+        }));
+    }
+
+    // Grant emergency access requests that have met the required wait time.
+    // This job should run before the emergency access reminders job to avoid
+    // sending reminders for requests that are about to be granted anyway.
+    if !CONFIG.emergency_request_timeout_schedule().is_empty() {
+        sched.add(Job::new(CONFIG.emergency_request_timeout_schedule().parse().unwrap(), || {
+            api::emergency_request_timeout_job(pool.clone());
+        }));
+    }
+
+    // Send reminders to emergency access grantors that there are pending
+    // emergency access requests.
+    if !CONFIG.emergency_notification_reminder_schedule().is_empty() {
+        sched.add(Job::new(CONFIG.emergency_notification_reminder_schedule().parse().unwrap(), || {
+            api::emergency_notification_reminder_job(pool.clone());
+        }));
+    }
+
     // Periodically check for jobs to run. We probably won't need any
     // jobs that run more often than once a minute, so a default poll
     // interval of 30 seconds should be sufficient. Users who want to
     // schedule jobs to run more frequently for some reason can reduce
     // the poll interval accordingly.
+    //
+    // Note that the scheduler checks jobs in the order in which they
+    // were added, so if two jobs are both eligible to run at a given
+    // tick, the one that was added earlier will run first.
     loop {
         sched.tick();
         thread::sleep(Duration::from_millis(CONFIG.job_poll_interval_ms()));
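The scheduling code follows the job_scheduler crate's standard pattern, which matches the JobScheduler/Job/tick() calls visible above: parse a cron expression (cron crate syntax, seconds field first) into a schedule, register closures with add(), and call tick() in a polling loop. A minimal self-contained version; the two cron strings and job bodies are placeholder values:

// Minimal job_scheduler usage sketch (e.g. job_scheduler = "1.2").
use job_scheduler::{Job, JobScheduler};
use std::time::Duration;

fn main() {
    let mut sched = JobScheduler::new();

    // Fields: second, minute, hour, day-of-month, month, day-of-week.
    // Jobs added first are checked first when several are due on the same tick.
    sched.add(Job::new("0 0 * * * *".parse().unwrap(), || {
        println!("hourly maintenance job");
    }));
    sched.add(Job::new("0 * * * * *".parse().unwrap(), || {
        println!("minutely check job");
    }));

    loop {
        sched.tick();
        std::thread::sleep(Duration::from_millis(30_000));
    }
}

Because tick() walks jobs in insertion order, registering the emergency-request timeout job before the reminder job is what guarantees the ordering the comment in the diff relies on.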
src/static/scripts/bootstrap-native.js (vendored, 477 lines changed): diff too large, suppressed.
src/static/scripts/bootstrap.css (vendored, 790 lines changed): diff too large, suppressed.

src/static/scripts/datatables.css (vendored, 110 lines changed)
@@ -4,13 +4,94 @@
  *
  * To rebuild or modify this file with the latest versions of the included
  * software please visit:
- *   https://datatables.net/download/#bs5/dt-1.10.25
+ *   https://datatables.net/download/#bs5/dt-1.11.3
  *
  * Included libraries:
- *   DataTables 1.10.25
+ *   DataTables 1.11.3
  */
 
 @charset "UTF-8";
+td.dt-control {
+  background: url("https://www.datatables.net/examples/resources/details_open.png") no-repeat center center;
+  cursor: pointer;
+}
+tr.dt-hasChild td.dt-control {
+  background: url("https://www.datatables.net/examples/resources/details_close.png") no-repeat center center;
+}
+table.dataTable th.dt-left,
+table.dataTable td.dt-left {
+  text-align: left;
+}
+table.dataTable th.dt-center,
+table.dataTable td.dt-center,
+table.dataTable td.dataTables_empty {
+  text-align: center;
+}
+table.dataTable th.dt-right,
+table.dataTable td.dt-right {
+  text-align: right;
+}
+table.dataTable th.dt-justify,
+table.dataTable td.dt-justify {
+  text-align: justify;
+}
+table.dataTable th.dt-nowrap,
+table.dataTable td.dt-nowrap {
+  white-space: nowrap;
+}
+table.dataTable thead th.dt-head-left,
+table.dataTable thead td.dt-head-left,
+table.dataTable tfoot th.dt-head-left,
+table.dataTable tfoot td.dt-head-left {
+  text-align: left;
+}
+table.dataTable thead th.dt-head-center,
+table.dataTable thead td.dt-head-center,
+table.dataTable tfoot th.dt-head-center,
+table.dataTable tfoot td.dt-head-center {
+  text-align: center;
+}
+table.dataTable thead th.dt-head-right,
+table.dataTable thead td.dt-head-right,
+table.dataTable tfoot th.dt-head-right,
+table.dataTable tfoot td.dt-head-right {
+  text-align: right;
+}
+table.dataTable thead th.dt-head-justify,
+table.dataTable thead td.dt-head-justify,
+table.dataTable tfoot th.dt-head-justify,
+table.dataTable tfoot td.dt-head-justify {
+  text-align: justify;
+}
+table.dataTable thead th.dt-head-nowrap,
+table.dataTable thead td.dt-head-nowrap,
+table.dataTable tfoot th.dt-head-nowrap,
+table.dataTable tfoot td.dt-head-nowrap {
+  white-space: nowrap;
+}
+table.dataTable tbody th.dt-body-left,
+table.dataTable tbody td.dt-body-left {
+  text-align: left;
+}
+table.dataTable tbody th.dt-body-center,
+table.dataTable tbody td.dt-body-center {
+  text-align: center;
+}
+table.dataTable tbody th.dt-body-right,
+table.dataTable tbody td.dt-body-right {
+  text-align: right;
+}
+table.dataTable tbody th.dt-body-justify,
+table.dataTable tbody td.dt-body-justify {
+  text-align: justify;
+}
+table.dataTable tbody th.dt-body-nowrap,
+table.dataTable tbody td.dt-body-nowrap {
+  white-space: nowrap;
+}
 
 /*! Bootstrap 5 integration for DataTables
  *
  * ©2020 SpryMedia Ltd, all rights reserved.

@@ -143,21 +224,21 @@ div.dataTables_scrollHead table.dataTable {
   margin-bottom: 0 !important;
 }
 
-div.dataTables_scrollBody table {
+div.dataTables_scrollBody > table {
   border-top: none;
   margin-top: 0 !important;
   margin-bottom: 0 !important;
 }
-div.dataTables_scrollBody table thead .sorting:before,
-div.dataTables_scrollBody table thead .sorting_asc:before,
-div.dataTables_scrollBody table thead .sorting_desc:before,
-div.dataTables_scrollBody table thead .sorting:after,
-div.dataTables_scrollBody table thead .sorting_asc:after,
-div.dataTables_scrollBody table thead .sorting_desc:after {
+div.dataTables_scrollBody > table > thead .sorting:before,
+div.dataTables_scrollBody > table > thead .sorting_asc:before,
+div.dataTables_scrollBody > table > thead .sorting_desc:before,
+div.dataTables_scrollBody > table > thead .sorting:after,
+div.dataTables_scrollBody > table > thead .sorting_asc:after,
+div.dataTables_scrollBody > table > thead .sorting_desc:after {
   display: none;
 }
-div.dataTables_scrollBody table tbody tr:first-child th,
-div.dataTables_scrollBody table tbody tr:first-child td {
+div.dataTables_scrollBody > table > tbody tr:first-child th,
+div.dataTables_scrollBody > table > tbody tr:first-child td {
   border-top: none;
 }
 

@@ -235,4 +316,11 @@ div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:last-
   padding-right: 0;
 }
+
+table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1) {
+  --bs-table-accent-bg: transparent;
+}
+table.dataTable.table-striped > tbody > tr.odd {
+  --bs-table-accent-bg: var(--bs-table-striped-bg);
+}
src/static/scripts/datatables.js (vendored, 887 lines changed): diff too large, suppressed.
Admin interface script:

@@ -62,8 +62,8 @@
         headers: { "Content-Type": "application/json" }
     }).then( resp => {
         if (resp.ok) { msg(successMsg, reload_page); return Promise.reject({error: false}); }
-        respStatus = resp.status;
-        respStatusText = resp.statusText;
+        const respStatus = resp.status;
+        const respStatusText = resp.statusText;
         return resp.text();
     }).then( respText => {
         try {

@@ -126,9 +126,9 @@
 
     // get current URL path and assign 'active' class to the correct nav-item
     (() => {
-        var pathname = window.location.pathname;
+        const pathname = window.location.pathname;
         if (pathname === "") return;
-        var navItem = document.querySelectorAll('.navbar-nav .nav-item a[href="'+pathname+'"]');
+        let navItem = document.querySelectorAll('.navbar-nav .nav-item a[href="'+pathname+'"]');
         if (navItem.length === 1) {
             navItem[0].className = navItem[0].className + ' active';
             navItem[0].setAttribute('aria-current', 'page');
src/static/templates/admin/diagnostics.hbs

@@ -58,7 +58,7 @@
     <dt class="col-sm-5">Running within Docker</dt>
     <dd class="col-sm-7">
         {{#if page_data.running_within_docker}}
-        <span class="d-block"><b>Yes</b></span>
+        <span class="d-block"><b>Yes (Base: {{ page_data.docker_base_image }})</b></span>
         {{/if}}
         {{#unless page_data.running_within_docker}}
         <span class="d-block"><b>No</b></span>

@@ -150,7 +150,7 @@
 
     <dt class="col-sm-5">Domain configuration
         <span class="badge bg-success d-none" id="domain-success" title="The domain variable matches the browser location and seems to be configured correctly.">Match</span>
-        <span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not matches the browsers location.
The domain variable does not seem to be configured correctly.
Some features may not work as expected!">No Match</span>
+        <span class="badge bg-danger d-none" id="domain-warning" title="The domain variable does not match the browser location.
The domain variable does not seem to be configured correctly.
Some features may not work as expected!">No Match</span>
         <span class="badge bg-success d-none" id="https-success" title="Configurued to use HTTPS">HTTPS</span>
         <span class="badge bg-danger d-none" id="https-warning" title="Not configured to use HTTPS.
Some features may not work as expected!">No HTTPS</span>
     </dt>

@@ -329,7 +329,7 @@
 
     supportString += "* Vaultwarden version: v{{ version }}\n";
     supportString += "* Web-vault version: v{{ page_data.web_vault_version }}\n";
-    supportString += "* Running within Docker: {{ page_data.running_within_docker }}\n";
+    supportString += "* Running within Docker: {{ page_data.running_within_docker }} (Base: {{ page_data.docker_base_image }})\n";
     supportString += "* Environment settings overridden: ";
     {{#if page_data.overrides}}
     supportString += "true\n"
src/static/templates/admin/organizations.hbs

@@ -37,8 +37,8 @@
             <span class="d-block"><strong>Size:</strong> {{attachment_size}}</span>
             {{/if}}
         </td>
-        <td class="text-end pe-2 small">
-            <a class="d-block" href="#" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</a>
+        <td class="text-end px-0 small">
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deleteOrganization({{jsesc Id}}, {{jsesc Name}}, {{jsesc BillingEmail}})'>Delete Organization</button>
         </td>
     </tr>
     {{/each}}
src/static/templates/admin/settings.hbs

@@ -12,9 +12,7 @@
     {{#each config}}
     {{#if groupdoc}}
     <div class="card bg-light mb-3">
-        <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">
-            <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">{{groupdoc}}</button>
-        </div>
+        <button id="b_{{group}}" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_{{group}}" data-bs-toggle="collapse" data-bs-target="#g_{{group}}">{{groupdoc}}</button>
         <div id="g_{{group}}" class="card-body collapse">
             {{#each elements}}
             {{#if editable}}

@@ -61,10 +59,8 @@
     {{/each}}
 
     <div class="card bg-light mb-3">
-        <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_readonly">
-            <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_readonly">Read-Only Config</button>
-        </div>
+        <button id="b_readonly" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_readonly"
+            data-bs-toggle="collapse" data-bs-target="#g_readonly">Read-Only Config</button>
         <div id="g_readonly" class="card-body collapse">
             <div class="small mb-3">
                 NOTE: These options can't be modified in the editor because they would require the server

@@ -109,9 +105,8 @@
 
     {{#if can_backup}}
     <div class="card bg-light mb-3">
-        <div class="card-header" role="button" data-bs-toggle="collapse" data-bs-target="#g_database">
-            <button type="button" class="btn btn-link text-decoration-none collapsed" data-bs-toggle="collapse" data-bs-target="#g_database">Backup Database</button>
-        </div>
+        <button id="b_database" type="button" class="card-header text-start btn btn-link text-decoration-none" aria-expanded="false" aria-controls="g_database"
+            data-bs-toggle="collapse" data-bs-target="#g_database">Backup Database</button>
         <div id="g_database" class="card-body collapse">
             <div class="small mb-3">
                 WARNING: This function only creates a backup copy of the SQLite database.

@@ -224,11 +219,10 @@
             onChange(); // Trigger the event initially
             checkbox.addEventListener("change", onChange);
         }
-        // These are formatted because otherwise the
-        // VSCode formatter breaks But they still work
-        // {{#each config}} {{#if grouptoggle}}
+        {{#each config}} {{#if grouptoggle}}
         masterCheck("input_{{grouptoggle}}", "#g_{{group}} input");
-        // {{/if}} {{/each}}
+        {{/if}} {{/each}}
 
         // Two functions to help check if there were changes to the form fields
         // Useful for example during the smtp test to prevent people from clicking save before testing there new settings
src/static/templates/admin/users.hbs

@@ -61,16 +61,16 @@
             {{/each}}
             </div>
         </td>
-        <td class="text-end pe-2 small">
+        <td class="text-end px-0 small">
             {{#if TwoFactorEnabled}}
-            <a class="d-block" href="#" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</a>
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='remove2fa({{jsesc Id}})'>Remove all 2FA</button>
             {{/if}}
-            <a class="d-block" href="#" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</a>
-            <a class="d-block" href="#" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</a>
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deauthUser({{jsesc Id}})'>Deauthorize sessions</button>
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='deleteUser({{jsesc Id}}, {{jsesc Email}})'>Delete User</button>
             {{#if user_enabled}}
-            <a class="d-block" href="#" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</a>
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='disableUser({{jsesc Id}}, {{jsesc Email}})'>Disable User</button>
             {{else}}
-            <a class="d-block" href="#" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</a>
+            <button type="button" class="btn btn-sm btn-link p-0 border-0" onclick='enableUser({{jsesc Id}}, {{jsesc Email}})'>Enable User</button>
             {{/if}}
         </td>
     </tr>
@@ -0,0 +1,8 @@
|
|||||||
|
Emergency access contact {{{grantee_email}}} accepted
|
||||||
|
<!---------------->
|
||||||
|
This email is to notify you that {{grantee_email}} has accepted your invitation to become an emergency access contact.
|
||||||
|
|
||||||
|
To confirm this user, log into the web vault ({{url}}), go to settings and confirm the user.
|
||||||
|
|
||||||
|
If you do not wish to confirm this user, you can also remove them on the same page.
|
||||||
|
{{> email/email_footer_text }}
|
||||||
@@ -0,0 +1,21 @@
|
|||||||
|
Emergency access contact {{{grantee_email}}} accepted
|
||||||
|
<!---------------->
|
||||||
|
{{> email/email_header }}
|
||||||
|
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
|
||||||
|
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
|
||||||
|
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
|
||||||
|
This email is to notify you that {{grantee_email}} has accepted your invitation to become an emergency access contact.
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
|
||||||
|
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
|
||||||
|
To confirm this user, log into the <a href="{{url}}/">web vault</a>, go to settings and confirm the user.
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
|
||||||
|
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
|
||||||
|
If you do not wish to confirm this user, you can also remove them on the same page.
|
||||||
|
</td>
|
||||||
|
</tr>
|
||||||
|
</table>
|
||||||
|
{{> email/email_footer }}
|
||||||
@@ -0,0 +1,6 @@
|
|||||||
|
Emergency access contact for {{{grantor_name}}} confirmed
|
||||||
|
<!---------------->
|
||||||
|
This email is to notify you that you have been confirmed as an emergency access contact for *{{grantor_name}}*.
|
||||||
|
|
||||||
|
You can now initiate emergency access requests from the web vault ({{url}}).
|
||||||
|
{{> email/email_footer_text }}
|
||||||
@@ -0,0 +1,16 @@
Emergency access contact for {{{grantor_name}}} confirmed
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
This email is to notify you that you have been confirmed as an emergency access contact for <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b>.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
You can now initiate emergency access requests from the <a href="{{url}}/">web vault</a>.
</td>
</tr>
</table>
{{> email/email_footer }}
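Worth noting in passing: the plain-text subject lines use triple-brace placeholders like {{{grantor_name}}}, which Handlebars emits unescaped, while the HTML bodies use double braces like {{grantor_name}}, which are HTML-escaped. A minimal sketch of the difference with the Rust handlebars crate (template strings and data invented for illustration):

// Assumed dependencies: handlebars = "4", serde_json = "1".
use handlebars::Handlebars;
use serde_json::json;

fn main() {
    let reg = Handlebars::new();
    let data = json!({ "grantor_name": "Alice <alice@example.com>" });

    // {{...}}: HTML-escaped output, safe inside markup.
    let escaped = reg
        .render_template("Confirmed for <b>{{grantor_name}}</b>", &data)
        .unwrap();
    // {{{...}}}: raw output, appropriate for a plain-text subject.
    let raw = reg
        .render_template("Emergency access contact for {{{grantor_name}}} confirmed", &data)
        .unwrap();

    println!("{escaped}"); // Confirmed for <b>Alice &lt;alice@example.com&gt;</b>
    println!("{raw}");     // Emergency access contact for Alice <alice@example.com> confirmed
}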
@@ -0,0 +1,4 @@
Emergency access request for {{{grantor_name}}} approved
<!---------------->
{{grantor_name}} has approved your emergency access request. You may now log in to the web vault ({{url}}) and access their account.
{{> email/email_footer_text }}
@@ -0,0 +1,11 @@
Emergency access request for {{{grantor_name}}} approved
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b> has approved your emergency access request. You may now log in to the <a href="{{url}}/">web vault</a> and access their account.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,6 @@
Emergency access request by {{{grantee_name}}} initiated
<!---------------->
{{grantee_name}} has initiated an emergency access request to {{atype}} your account. You may log in to the web vault ({{url}}) and manually approve or reject this request.

If you do nothing, the request will automatically be approved after {{wait_time_days}} day(s).
{{> email/email_footer_text }}
@@ -0,0 +1,16 @@
Emergency access request by {{{grantee_name}}} initiated
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantee_name}}</b> has initiated an emergency access request to <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{atype}}</b> your account. You may log in to the <a href="{{url}}/">web vault</a> and manually approve or reject this request.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
If you do nothing, the request will automatically be approved after <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{wait_time_days}}</b> day(s).
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,4 @@
Emergency access request to {{{grantor_name}}} rejected
<!---------------->
{{grantor_name}} has rejected your emergency access request.
{{> email/email_footer_text }}
@@ -0,0 +1,11 @@
Emergency access request to {{{grantor_name}}} rejected
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b> has rejected your emergency access request.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,6 @@
Emergency access request by {{{grantee_name}}} is pending
<!---------------->
{{grantee_name}} has a pending emergency access request to {{atype}} your account. You may log in to the web vault ({{url}}) and manually approve or reject this request.

If you do nothing, the request will automatically be approved after {{days_left}} day(s).
{{> email/email_footer_text }}
@@ -0,0 +1,16 @@
Emergency access request by {{{grantee_name}}} is pending
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantee_name}}</b> has a pending emergency access request to <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{atype}}</b> your account. You may log in to the <a href="{{url}}/">web vault</a> and manually approve or reject this request.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
If you do nothing, the request will automatically be approved after <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{days_left}}</b> day(s).
</td>
</tr>
</table>
{{> email/email_footer }}
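The {{days_left}} in this reminder presumably comes from when the request was initiated plus the grantor's configured wait period. A hedged sketch of that arithmetic with the chrono crate (function and field names invented for illustration, not the project's actual code):

// Assumed dependency: chrono = "0.4".
use chrono::{Duration, NaiveDateTime, Utc};

// Illustrative: whole days remaining before an emergency access
// request would be auto-approved.
fn days_left(initiated_at: NaiveDateTime, wait_time_days: i64) -> i64 {
    let deadline = initiated_at + Duration::days(wait_time_days);
    (deadline - Utc::now().naive_utc()).num_days().max(0)
}

fn main() {
    let initiated = Utc::now().naive_utc() - Duration::days(2);
    println!("day(s) left: {}", days_left(initiated, 7)); // ~5
}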
@@ -0,0 +1,4 @@
Emergency access request by {{{grantee_name}}} granted
<!---------------->
{{grantee_name}} has been granted emergency access to {{atype}} your account. You may log in to the web vault ({{url}}) and manually revoke this access.
{{> email/email_footer_text }}
@@ -0,0 +1,11 @@
Emergency access request by {{{grantee_name}}} granted
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantee_name}}</b> has been granted emergency access to <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{atype}}</b> your account. You may log in to the <a href="{{url}}/">web vault</a> and manually revoke this access.
</td>
</tr>
</table>
{{> email/email_footer }}
10
src/static/templates/email/incomplete_2fa_login.hbs
Normal file
@@ -0,0 +1,10 @@
Incomplete Two-Step Login From {{{device}}}
<!---------------->
Someone attempted to log into your account with the correct master password, but did not provide the correct token or action required to complete the two-step login process within {{time_limit}} minutes of the initial login attempt.

* Date: {{datetime}}
* IP Address: {{ip}}
* Device Type: {{device}}

If this was not you or someone you authorized, change your master password as soon as possible, as it has likely been compromised.
{{> email/email_footer_text }}
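A hedged sketch of how this text template might be fed its placeholders ({{time_limit}}, {{datetime}}, {{ip}}, {{device}}) with the Rust handlebars crate; the registration name and sample values are illustrative, not the project's actual wiring:

// Assumed dependencies: handlebars = "4", serde_json = "1".
use handlebars::Handlebars;
use serde_json::json;

fn main() {
    let mut reg = Handlebars::new();
    // Illustrative inline registration; the real template lives at
    // src/static/templates/email/incomplete_2fa_login.hbs.
    reg.register_template_string(
        "incomplete_2fa_login",
        "Complete two-step login within {{time_limit}} minute(s).\n\
         * Date: {{datetime}}\n* IP Address: {{ip}}\n* Device Type: {{device}}",
    )
    .unwrap();

    let body = reg
        .render(
            "incomplete_2fa_login",
            &json!({
                "time_limit": 3,
                "datetime": "2021-10-01 12:34:56 UTC",
                "ip": "203.0.113.7",
                "device": "Firefox",
            }),
        )
        .unwrap();
    println!("{body}");
}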
31
src/static/templates/email/incomplete_2fa_login.html.hbs
Normal file
@@ -0,0 +1,31 @@
Incomplete Two-Step Login From {{{device}}}
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Someone attempted to log into your account with the correct master password, but did not provide the correct token or action required to complete the two-step login process within {{time_limit}} minutes of the initial login attempt.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b>Date:</b> {{datetime}}
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b>IP Address:</b> {{ip}}
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
<b>Device Type:</b> {{device}}
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none;" valign="top">
If this was not you or someone you authorized, change your master password as soon as possible, as it has likely been compromised.
</td>
</tr>
</table>
{{> email/email_footer }}
@@ -0,0 +1,8 @@
Emergency access for {{{grantor_name}}}
<!---------------->
You have been invited to become an emergency contact for {{grantor_name}}. To accept this invite, click the following link:

Click here to join: {{url}}/#/accept-emergency/?id={{emer_id}}&name={{grantor_name}}&email={{email}}&token={{token}}

If you do not wish to become an emergency contact for {{grantor_name}}, you can safely ignore this email.
{{> email/email_footer_text }}
@@ -0,0 +1,24 @@
Emergency access for {{{grantor_name}}}
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
You have been invited to become an emergency contact for <b style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">{{grantor_name}}</b>.
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
<a href="{{url}}/#/accept-emergency/?id={{emer_id}}&name={{grantor_name}}&email={{email}}&token={{token}}"
clicktracking=off target="_blank" style="color: #ffffff; text-decoration: none; text-align: center; cursor: pointer; display: inline-block; border-radius: 5px; background-color: #3c8dbc; border-color: #3c8dbc; border-style: solid; border-width: 10px 20px; margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
Become emergency contact
</a>
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
If you do not wish to become an emergency contact for {{grantor_name}}, you can safely ignore this email.
</td>
</tr>
</table>
{{> email/email_footer }}
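One detail the invite link makes easy to miss: {{grantor_name}} and {{email}} are interpolated straight into a query string, so whoever builds those values needs them URL-encoded first. A hedged sketch with the percent-encoding crate (crate choice and helper name are assumptions, not necessarily how Vaultwarden does it):

// Assumed dependency: percent-encoding = "2".
use percent_encoding::{utf8_percent_encode, NON_ALPHANUMERIC};

// Illustrative helper: assemble the accept-emergency URL with
// URL-safe query parameter values.
fn accept_emergency_url(base: &str, emer_id: &str, name: &str, email: &str, token: &str) -> String {
    let enc = |s: &str| utf8_percent_encode(s, NON_ALPHANUMERIC).to_string();
    format!(
        "{}/#/accept-emergency/?id={}&name={}&email={}&token={}",
        base,
        enc(emer_id),
        enc(name),
        enc(email),
        enc(token)
    )
}

fn main() {
    let url = accept_emergency_url(
        "https://vault.example.com", // placeholder base URL
        "df443b2a",                  // placeholder request id
        "Alice Liddell",
        "alice@example.com",
        "some-jwt-token",            // placeholder token
    );
    println!("{url}");
}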
Some files were not shown because too many files have changed in this diff.