added comparison to fplll and Nemo (FLINT) in README.md
chrisvwx committed Mar 14, 2020
1 parent 8efd091 commit 69c7cac
Showing 18 changed files with 613 additions and 123 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -2,7 +2,7 @@ name = "LLLplus"
uuid = "142c1900-a1c3-58ae-a66d-b187f9ca6423"
keywords = ["lattice reduction", "lattice basis reduction", "shortest vector problem", "closest vector problem", "LLL", "Lenstra-Lenstra-Lovász", "Seysen", "Brun", "VBLAST", "subset-sum problem","Lagarias-Odlyzko","Bailey–Borwein–Plouffe formula"]
license = "MIT"
version = "1.2.6"
version = "1.2.7"

[deps]
DelimitedFiles = "8bb1440f-4735-579b-a4ab-409b98df4dab"
98 changes: 58 additions & 40 deletions README.md
@@ -5,10 +5,10 @@

LLLplus includes
[Lenstra-Lenstra-Lovász](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm)
(LLL), [Brun](https://en.wikipedia.org/wiki/Viggo_Brun), and Seysen lattice reduction; VBLAST matrix
decomposition; and a
(LLL), [Brun](https://en.wikipedia.org/wiki/Viggo_Brun), and Seysen lattice reduction; and [shortest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_.28SVP.29)
(SVP) and
[closest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Closest_vector_problem_.28CVP.29)
(CVP) solver. These lattice reduction and related lattice tools are
(CVP) solvers. These lattice reduction and related lattice tools are
used in cryptography, digital communication, and integer programming.
The historical and practical prominence of the LLL technique in
lattice tools is the reason for its use in the name "LLLplus".
@@ -18,12 +18,12 @@ This package is experimental; see
LLL [1] lattice reduction is a powerful tool that is widely used in
cryptanalysis, in cryptographic system design, in digital
communications, and to solve other integer problems. LLL reduction is
often used as an approximate solution to the
[shortest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_.28SVP.29)
(SVP). We also include Gauss/Lagrange, Brun [2] and Seysen [3]
often used as an approximate solution to the SVP.
We also include Gauss/Lagrange, Brun [2] and Seysen [3]
lattice reduction techniques. The LLL, Brun, and Seysen algorithms are
based on [4]. The CVP solver is based on [5] and can handle lattices
and bounded integer constellations.
and bounded integer constellations. A slow SVP solver based on the CVP
tool is included as well.
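
As a quick hedged sketch of how these pieces fit together (the `lll` and
`cvp` calls below follow the pattern in the package's own examples, but the
exact signatures are assumptions; check `?lll` and `?cvp` for the
authoritative forms):

```julia
using LLLplus
using LinearAlgebra

# Reduce a random basis; lll is assumed to return the reduced
# basis B and a unimodular transform T with H*T == B.
H = randn(4, 4)
B, T = lll(H)

# CVP decoding, following the package's example pattern: pass
# Q'y and R from a QR decomposition of the basis to cvp.
u = Int.(rand(0:1, 4))
y = H * u + rand(4) / 100
Q, R = qr(H)
uhat = cvp(Q' * y, R)
# uhat should recover u when the noise is small
```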

We also include code to do a
[Vertical-Bell Laboratories Layered Space-Time](https://en.wikipedia.org/wiki/Bell_Laboratories_Layered_Space-Time)
@@ -35,19 +35,20 @@ demonstrates how these functions can be used in encoding and decoding
multi-antenna signals.

Another important application is in cryptanalysis; as an example of a
cryptanalytic attack, see the `subsetsum` function. Another important
application is in integer programming, where the LLL algorithm has
cryptanalytic attack, see the `subsetsum` function. The LLL algorithm has
been shown to solve the integer programming feasibility problem; see
`integerfeasibility`. Lattice tools are often used to study and solve
Diophantine problems; for example, in "simultaneous Diophantine
approximation" a vector of real numbers is approximated by rationals
with a common denominator. For a demo function, see `rationalapprox`.
Finally, to see how the LLL can be used to find spigot formulas for
irrationals, see `spigotBBP`.
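
A hedged sketch of the subset-sum demo mentioned above (the `subsetsum(a, s)`
argument order is an assumption based on the function name; see `?subsetsum`
for the real interface):

```julia
using LLLplus

# Hypothetical usage: given weights a and target s, look for a
# binary vector x with a'x == s via a Lagarias-Odlyzko-style
# lattice attack. The (a, s) argument order is an assumption.
a = [1, 5, 9, 13, 27]
xtrue = [1, 0, 1, 0, 1]
s = a' * xtrue           # target sum
x = subsetsum(a, s)      # expected to return a valid solution
```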

### Examples

Each function contains documentation and examples available via Julia's
built-in documentation system, for example with `?lll`. Documentation
of all the functions are available on
for all functions is available on
[pkg.julialang.org](https://pkg.julialang.org/docs/LLLplus/). A tutorial notebook is
found in the [`docs`](docs/LLLplusTutorial.ipynb) directory or on
[nbviewer](https://nbviewer.jupyter.org/github/christianpeel/LLLplus.jl/blob/master/docs/LLLplusTutorial.ipynb).
@@ -79,44 +80,61 @@ sum(abs.(u-uhat))

### Execution Time results

The following performance results are obtained from the
following command in the top-level LLLplus directory:
`julia -e 'include("benchmark/perftest.jl")'`
In the tests we time execution of the lattice-reduction functions,
average the results over multiple random matrices, and show results as
a function of the size of the matrix and of the data type.

We first show how the time varies with matrix size (1,2,4,...256); the
vertical axis shows execution time on a logarithmic scale; the x-axis
is also logarithmic. The generally linear nature of the LLL curve supports
the polynomial-time nature of the algorithm. Each data point
is the average of execution time of 40 runs of a lattice-reduction
technique, where the matrices used were generated using 'randn' to
emulate unit-variance Gaussian-distributed values.

![Time vs matrix size](docs/src/assets/perfVsNfloat64.png)

All the modules can handle a variety of data types. In the next figure
we show execution time for several built-in datatypes (Int32, Int64,
Int128, Float32, Float64, BigInt, and BigFloat) as well as types from
external packages (Float128 from Quadmath.jl and Double64 from
DoubleFloats.jl) which are used to generate 40 128x128 matrices, over
which execution time for the lattice reduction techniques is averaged.
The vertical axis is a logarithmic representation of execution time as
in the previous figure.
In the first test we compare the `lll` function from LLLplus, the
`l2avx` function in the `src/l2.jl` file in LLLplus, the
`lll_with_transform` function from Nemo (which uses FLINT), and the
`lll_reduction` function from fplll. Nemo and fplll are written by
number theorists and are good benchmarks against which to compare. We
first show how the execution time varies as the basis (matrix) size
varies over [4 8 16 32 64]. For each matrix size, 20 random bases
are generated using fplll's `gen_qary` function with a depth of 25
bits, with the average execution time shown; the `eltype` is `Int64`
except for Nemo, which uses GMP (its own `BigInt`); in all cases
`δ=.99`. The vertical axis shows
execution time on a logarithmic scale; the x-axis is also
logarithmic. The generally linear nature of the LLL curves supports
the polynomial-time nature of the algorithm. The `LLLplus.lll`
function is slower, while `l2avx` is similar to fplll. Though not
shown, using bases from `gen_qary` with bit depth of 45 gives fplll
a larger advantage. This figure was generated using code in
`test/timeLLLs.jl`.
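
The timing loop can be sketched roughly as follows (the `gen_qary_b`
argument order and the `lll(b, δ)` call form are assumptions based on the
exported names; `test/timeLLLs.jl` has the actual script):

```julia
using LLLplus

# Rough sketch: average lll execution time over 20 random q-ary
# bases per dimension. The gen_qary_b arguments (eltype, dimension,
# k, bits) are assumed from the function name.
for n in [4, 8, 16, 32, 64]
    t = 0.0
    for trial in 1:20
        b = gen_qary_b(Int64, n, n ÷ 2, 25)
        t += @elapsed lll(b, 0.99)
    end
    println("n = $n: ", t / 20, " seconds")
end
```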

![Time vs basis size](docs/src/assets/timeVdim_25bitsInt64.png)

One question that could arise when looking at the plot above is what
the quality of the basis is. In the next plot we show execution time
vs the norm of the first vector in the reduced basis; this first
vector is typically the smallest, and its norm is a rough indication of
the quality of the reduced basis. We show results averaged over 20
random bases from `gen_qary` with depth `25` bits, this time with the
dimension fixed at `32`. The curve is created by varying the `δ`
parameter from `.29` to `.99` in steps of `.2`; the larger times and
smaller norms correspond to the largest `δ` values. Though the `l2avx`
function is competitive with fplll in this case, in many other cases
the fplll code is significantly faster.
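
A sketch of the δ sweep (again, the `gen_qary_b` argument order and the
`lll(b, δ)` form are assumptions, not the package's documented interface):

```julia
using LLLplus
using LinearAlgebra

# For each δ, time the reduction and record the norm of the first
# reduced-basis vector as a rough quality measure.
for δ in 0.29:0.2:0.99
    b = gen_qary_b(Int64, 32, 16, 25)
    t = @elapsed Bred = first(lll(b, δ))
    println("δ = $δ: time = $t, norm(b1) = ", norm(Bred[:, 1]))
end
```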

![Time vs reduction quality](docs/src/assets/timeVsmallest_25bitsInt64.png)

Finally, we show execution time for several built-in
datatypes (Int32, Int64, Int128, Float32, Float64, BigInt, and
BigFloat) as well as types from external packages (Float128 from
Quadmath.jl and Double64 from DoubleFloats.jl) which are used to
generate 40 128x128 matrices, over which execution time for the
lattice reduction techniques is averaged. The vertical axis is a
logarithmic representation of execution time as in the previous
figure. This figure was generated using code in `test/perftest.jl`.
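
A minimal sketch of the data-type comparison (assuming `lll` simply
dispatches on the matrix `eltype`, which the export list suggests but does
not guarantee):

```julia
using LLLplus

# Run the same reduction over several element types; each call is
# assumed to dispatch on eltype(A).
for T in (Int64, Int128, Float64, BigInt, BigFloat)
    A = T.(rand(-10:10, 8, 8))
    Bred = first(lll(A))
    println(T, " => eltype of reduced basis: ", eltype(Bred))
end
```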

![Time vs data type](docs/src/assets/perfVsDataType.png)

### Notes

There are certainly many improvements and additions that could be made
to LLLplus, such as adding Block-Korkin-Zolotarev lattice reduction
to LLLplus, such as adding Block-Korkin-Zolotarev (BKZ) lattice reduction
with improvements as in [8]. Even so, it would be hard to compete with
[fplll](https://github.com/fplll/fplll) on features. In fact, a Julia
wrapper around the [fplll](https://github.com/fplll/fplll) or
[Number Theory Library](http://www.shoup.net/ntl/) would be the most
useful addition to lattice tools in Julia. These respected tools could
be used directly and would provide functionality not in LLLplus.
wrapper around [fplll](https://github.com/fplll/fplll) would be the most
useful addition to lattice tools in Julia; it would
provide functionality not in LLLplus, such as BKZ reduction.

The algorithm pseudocode in the monograph [7] and the survey paper [4]
were very helpful in writing the lattice reduction tools in LLLplus
Binary file removed benchmark/perfVsDataTypeN16.png
Binary file modified docs/src/assets/perfVsDataType.png
Binary file added docs/src/assets/timeVdim_25bitsInt64.png
Binary file added docs/src/assets/timeVsmallest_25bitsInt64.png
1 change: 0 additions & 1 deletion docs/src/functions.md
@@ -9,7 +9,6 @@ end
```
```@docs
lll
l2
cvp
svp
brun
98 changes: 58 additions & 40 deletions docs/src/index.md
@@ -6,10 +6,10 @@ CurrentModule = LLLplus

LLLplus includes
[Lenstra-Lenstra-Lovász](https://en.wikipedia.org/wiki/Lenstra%E2%80%93Lenstra%E2%80%93Lov%C3%A1sz_lattice_basis_reduction_algorithm)
(LLL), [Brun](https://en.wikipedia.org/wiki/Viggo_Brun), and Seysen lattice reduction; VBLAST matrix
decomposition; and a
(LLL), [Brun](https://en.wikipedia.org/wiki/Viggo_Brun), and Seysen lattice reduction; and [shortest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_.28SVP.29)
(SVP) and
[closest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Closest_vector_problem_.28CVP.29)
(CVP) solver. These lattice reduction and related lattice tools are
(CVP) solvers. These lattice reduction and related lattice tools are
used in cryptography, digital communication, and integer programming.
The historical and practical prominence of the LLL technique in
lattice tools is the reason for its use in the name "LLLplus".
@@ -19,12 +19,12 @@ This package is experimental; see
LLL [1] lattice reduction is a powerful tool that is widely used in
cryptanalysis, in cryptographic system design, in digital
communications, and to solve other integer problems. LLL reduction is
often used as an approximate solution to the
[shortest vector problem](https://en.wikipedia.org/wiki/Lattice_problem#Shortest_vector_problem_.28SVP.29)
(SVP). We also include Gauss/Lagrange, Brun [2] and Seysen [3]
often used as an approximate solution to the SVP.
We also include Gauss/Lagrange, Brun [2] and Seysen [3]
lattice reduction techniques. The LLL, Brun, and Seysen algorithms are
based on [4]. The CVP solver is based on [5] and can handle lattices
and bounded integer constellations.
and bounded integer constellations. A slow SVP solver based on the CVP
tool is included as well.

We also include code to do a
[Vertical-Bell Laboratories Layered Space-Time](https://en.wikipedia.org/wiki/Bell_Laboratories_Layered_Space-Time)
@@ -36,19 +36,20 @@ demonstrates how these functions can be used in encoding and decoding
multi-antenna signals.

Another important application is in cryptanalysis; as an example of a
cryptanalytic attack, see the `subsetsum` function. Another important
application is in integer programming, where the LLL algorithm has
cryptanalytic attack, see the `subsetsum` function. The LLL algorithm has
been shown to solve the integer programming feasibility problem; see
`integerfeasibility`. Lattice tools are often used to study and solve
Diophantine problems; for example, in "simultaneous Diophantine
approximation" a vector of real numbers is approximated by rationals
with a common denominator. For a demo function, see `rationalapprox`.
Finally, to see how the LLL can be used to find spigot formulas for
irrationals, see `spigotBBP`.

### Examples

Each function contains documentation and examples available via Julia's
built-in documentation system, for example with `?lll`. Documentation
of all the functions are available on
for all functions is available on
[pkg.julialang.org](https://pkg.julialang.org/docs/LLLplus/). A tutorial notebook is
found in the `docs` directory or on
[nbviewer](https://nbviewer.jupyter.org/github/christianpeel/LLLplus.jl/blob/master/docs/LLLplusTutorial.ipynb).
@@ -80,44 +81,61 @@ sum(abs.(u-uhat))

### Execution Time results

The following performance results are obtained from the
following command in the top-level LLLplus directory:
`julia -e 'include("benchmark/perftest.jl")'`
In the tests we time execution of the lattice-reduction functions,
average the results over multiple random matrices, and show results as
a function of the size of the matrix and of the data type.

We first show how the time varies with matrix size (1,2,4,...256); the
vertical axis shows execution time on a logarithmic scale; the x-axis
is also logarithmic. The generally linear nature of the LLL curve supports
the polynomial-time nature of the algorithm. Each data point
is the average of execution time of 40 runs of a lattice-reduction
technique, where the matrices used were generated using 'randn' to
emulate unit-variance Gaussian-distributed values.

![Time vs matrix size](assets/perfVsNfloat64.png)

All the modules can handle a variety of data types. In the next figure
we show execution time for several built-in datatypes (Int32, Int64,
Int128, Float32, Float64, BigInt, and BigFloat) as well as types from
external packages (Float128 from Quadmath.jl and Double64 from
DoubleFloats.jl) which are used to generate 40 128x128 matrices, over
which execution time for the lattice reduction techniques is averaged.
The vertical axis is a logarithmic representation of execution time as
in the previous figure.
In the first test we compare the `lll` function from LLLplus, the
`l2avx` function in the `src/l2.jl` file in LLLplus, the
`lll_with_transform` function from Nemo (which uses FLINT), and the
`lll_reduction` function from fplll. Nemo and fplll are written by
number theorists and are good benchmarks against which to compare. We
first show how the execution time varies as the basis (matrix) size
varies over [4 8 16 32 64]. For each matrix size, 20 random bases
are generated using fplll's `gen_qary` function with a depth of 25
bits, with the average execution time shown; the `eltype` is `Int64`
except for Nemo, which uses GMP (its own `BigInt`); in all cases
`δ=.99`. The vertical axis shows
execution time on a logarithmic scale; the x-axis is also
logarithmic. The generally linear nature of the LLL curves supports
the polynomial-time nature of the algorithm. The `LLLplus.lll`
function is slower, while `l2avx` is similar to fplll. Though not
shown, using bases from `gen_qary` with bit depth of 45 gives fplll
a larger advantage. This figure was generated using code in
`test/timeLLLs.jl`.

![Time vs basis size](assets/timeVdim_25bitsInt64.png)

One question that could arise when looking at the plot above is what
the quality of the basis is. In the next plot we show execution time
vs the norm of the first vector in the reduced basis; this first
vector is typically the smallest, and its norm is a rough indication of
the quality of the reduced basis. We show results averaged over 20
random bases from `gen_qary` with depth `25` bits, this time with the
dimension fixed at `32`. The curve is created by varying the `δ`
parameter from `.29` to `.99` in steps of `.2`; the larger times and
smaller norms correspond to the largest `δ` values. Though the `l2avx`
function is competitive with fplll in this case, in many other cases
the fplll code is significantly faster.

![Time vs reduction quality](assets/timeVsmallest_25bitsInt64.png)

Finally, we show execution time for several built-in
datatypes (Int32, Int64, Int128, Float32, Float64, BigInt, and
BigFloat) as well as types from external packages (Float128 from
Quadmath.jl and Double64 from DoubleFloats.jl) which are used to
generate 40 128x128 matrices, over which execution time for the
lattice reduction techniques is averaged. The vertical axis is a
logarithmic representation of execution time as in the previous
figure. This figure was generated using code in `test/perftest.jl`.

![Time vs data type](assets/perfVsDataType.png)

### Notes

There are certainly many improvements and additions that could be made
to LLLplus, such as adding Block-Korkin-Zolotarev lattice reduction
to LLLplus, such as adding Block-Korkin-Zolotarev (BKZ) lattice reduction
with improvements as in [8]. Even so, it would be hard to compete with
[fplll](https://github.com/fplll/fplll) on features. In fact, a Julia
wrapper around the [fplll](https://github.com/fplll/fplll) or
[Number Theory Library](http://www.shoup.net/ntl/) would be the most
useful addition to lattice tools in Julia. These respected tools could
be used directly and would provide functionality not in LLLplus.
wrapper around [fplll](https://github.com/fplll/fplll) would be the most
useful addition to lattice tools in Julia; it would
provide functionality not in LLLplus, such as BKZ reduction.

The algorithm pseudocode in the monograph [7] and the survey paper [4]
were very helpful in writing the lattice reduction tools in LLLplus
3 changes: 1 addition & 2 deletions src/LLLplus.jl
@@ -11,7 +11,7 @@ using LinearAlgebra
using Printf

export
lll,l2,
lll,
cvp,
svp,
brun,
@@ -27,7 +27,6 @@ export
gen_qary_b,gen_qary!,
dataTypeForGram,intTypeGivenBitsRequired

include("l2.jl") # l2 (cholseky) variant of LLL
include("lll.jl") # lll, gauss, sizereduction
include("cvp.jl") # cvp, svp
include("brun.jl")
16 changes: 10 additions & 6 deletions src/applications.jl
@@ -416,13 +416,16 @@ If a formula is found, it is printed to the screen in LaTeX and the
coefficients `a` are returned as a vector. An online LaTeX viewer
such as https://www.latex4technics.com/ may be helpful.
This is not a robust tool, just a demo. For example, there may be a problem
with s≥2.
This is not a robust tool, just a demo. For example, there may be a
problem with s≥2. See [2] for derivation of the technique used, and to
check whether a formula you find is new.
[1] Bailey, David, Peter Borwein, and Simon Plouffe. "On the rapid
[1] David Bailey, Peter Borwein, and Simon Plouffe. "On the rapid
computation of various polylogarithmic constants." Mathematics of
Computation 66.218 (1997): 903-913.
https://www.ams.org/journals/mcom/1997-66-218/S0025-5718-97-00856-9/
[2] David Bailey, "A Compendium of BBP-Type Formulas for Mathematical
Constants". https://www.davidhbailey.com//dhbpapers/bbp-formulas.pdf
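
For reference, the original BBP series for π from [1], which a successful
`spigotBBP` search with `b=16` and `n=8` would rediscover, can be checked
directly:

```julia
# Direct evaluation of the BBP series for pi from [1]; a dozen
# terms already give roughly 14 correct digits in Float64.
bbp_pi(K) = sum(1 / 16.0^k * (4 / (8k + 1) - 2 / (8k + 4) -
                              1 / (8k + 5) - 1 / (8k + 6)) for k in 0:K)
abs(bbp_pi(12) - π) < 1e-12   # true
```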
# Example
```jldoctest
@@ -441,7 +444,8 @@ spigotBBP(8*sqrt(2)*log(1+sqrt(2)),1,16,8,25,true);
```
There is a formula for pi^2 which the following command should find, but it
does not find it. It's no obvious what the problem is
does not find it. In fact the technique doesn't seem to work at all for
s>2; it's not obvious what the problem is.
```julia
spigotBBP(BigFloat(pi)*pi,2,64,6,25,true);
```
@@ -467,10 +471,10 @@ function spigotBBP(α::Td,s,b,n,K,verbose=false) where {Td}
end
end
end
@printf("\\right)")
@printf("\\right)\n")
end
if ismissing(av[1])
verbose && @printf("A solution was found not found.\n")
verbose && @printf("A solution was not found.\n")
return missing
else
return av

2 comments on commit 69c7cac

@chrisvwx

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/10939

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if Julia TagBot is installed, or can be done manually through the github interface, or via:

git tag -a v1.2.7 -m "<description of version>" 69c7cac2a75c61f2ce5792cf112f5e2328c22216
git push origin v1.2.7
