docs missing on the question of precision #7
Thanks for reaching out. In principle, singularities should not pose a problem, since the algorithm computes critical points. How many hypersurfaces and how many variables are we talking about? The implementation uses 64-bit machine precision. Can you share a bit more detail on your example? Maybe I can also take a look at it.
The background of the computations we'd like to carry out is geometric group theory; more precisely, computing a fundamental domain of the Hilbert modular group of a real quadratic extension. The hypersurfaces are mostly affine of degree 4 (of a particular rational type, although as far as I understand this is of no direct help for your solver). There are also a few of smaller degree.
This particular fragment features a singularity: a common root at the centre of the ball of squared radius 3/5 inside the sphere.
The full input is typically much longer. However, we are programming a subdivision-based algorithm that lets us "zoom in" on subsets with (hopefully) relatively few hypersurfaces (tens or hundreds). We are only interested in the cell where all the functions (specified by the hypersurface equations) are positive, and in the adjacent cells (where exactly one function is negative).

It is crucial for us not to lose any topological information to numerical issues, since that information is needed to compute a presentation of the group, which is the real problem we're after. Does the solver detect such issues and throw an error? A related question: is there a way to get a correctness certificate for the computation (something that is often available in convex optimisation)? Last but not least, I presume that incrementally refining the arrangement (by adding more hypersurfaces) is not feasible with this approach. Is that true?
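To illustrate the subdivision idea described above, here is a minimal Python sketch (all names, such as `may_cross`, are hypothetical and not part of any solver's API) of how one might discard hypersurfaces that cannot meet a given box, so that each box keeps only a few of them:

```python
import itertools

def may_cross(f, box, samples=3):
    """Heuristically test whether the hypersurface f = 0 can meet the box,
    by looking for a sign change of f over a grid of sample points.
    Illustrative only: a sound version would use interval arithmetic."""
    grids = [
        [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
        for lo, hi in box
    ]
    signs = {f(*p) > 0 for p in itertools.product(*grids)}
    return len(signs) > 1  # both signs seen -> f may vanish inside the box

# Example: the circle x^2 + y^2 = 1 meets [0,2] x [0,2] but not [2,3] x [2,3].
sphere = lambda x, y: x**2 + y**2 - 1
print(may_cross(sphere, [(0, 2), (0, 2)]))  # True
print(may_cross(sphere, [(2, 3), (2, 3)]))  # False
```

A subdivision scheme would recurse on the boxes, re-filtering the hypersurface list at each level; note that sampling can miss a hypersurface that dips into a box without changing sign on the grid, which is why a rigorous version needs interval bounds.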
One can imagine that, at least once the critical points are found, only those within the desired region are taken for further processing, and the rest are ignored.
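That filtering step could look like the following Python sketch (a hypothetical helper, not the solver's actual API): classify each critical point by the signs of the defining functions, keeping the interior cell and the cells adjacent across a single hypersurface:

```python
def classify_point(point, funcs, tol=1e-10):
    """Classify a critical point by the signs of the defining functions.

    Returns 'interior' if all f_i > 0, 'adjacent' if exactly one f_i < 0,
    and 'discard' otherwise.  tol guards against spurious sign flips at
    machine precision for points lying almost on a hypersurface.
    (Hypothetical helper, not part of any solver's API.)"""
    vals = [f(*point) for f in funcs]
    negatives = sum(1 for v in vals if v < -tol)
    if negatives == 0 and all(v > tol for v in vals):
        return "interior"
    if negatives == 1:
        return "adjacent"
    return "discard"

# Toy arrangement: the triangle x > 0, y > 0, 1 - x - y > 0.
funcs = [lambda x, y: x, lambda x, y: y, lambda x, y: 1 - x - y]
print(classify_point((0.2, 0.3), funcs))   # interior
print(classify_point((-0.1, 0.3), funcs))  # adjacent
```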
The algorithm works in two stages. In the first stage, the critical points are computed.
There are a lot of built-in heuristics to detect numerical issues. I would trust the numerical computation, but of course it is not a certified computation.
The answer to this question is subtle. It is possible to certify the computation of each critical point individually (although this is currently not implemented as an option here). However, there is no method (not even theoretically) to certify that we have found all critical points.
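For intuition, certifying an individual critical point is typically done with interval methods: if an interval-Newton step maps an interval strictly into itself, a root is proved to exist inside it. A one-dimensional Python sketch (hypothetical illustration only; real certifiers work in many variables and use rigorous, outward-rounded interval arithmetic):

```python
def interval_newton_step(f, df, lo, hi):
    """One interval-Newton step for a root of f in [lo, hi].

    If the result is strictly contained in [lo, hi], the existence of a
    root inside it is certified.  This is a 1-D sketch; the enclosure of
    df below uses endpoint values, which is valid only because df is
    monotone on the example interval."""
    mid = (lo + hi) / 2.0
    d_lo, d_hi = sorted((df(lo), df(hi)))
    assert d_lo > 0, "derivative interval must not contain 0"
    a = mid - f(mid) / d_lo
    b = mid - f(mid) / d_hi
    new_lo, new_hi = min(a, b), max(a, b)
    return max(lo, new_lo), min(hi, new_hi)

# Certify the root sqrt(2) of x^2 - 2 inside [1, 2]:
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
lo, hi = interval_newton_step(f, df, 1.0, 2.0)
print((lo, hi))  # a strictly smaller interval still containing sqrt(2)
```

The obstruction mentioned above is exactly that such a certificate is local: it vouches for the points you found, but says nothing about points you never found.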
This is true: adding more hypersurfaces changes the critical points, so the previous computation becomes useless.
See my answer above. I hope this clarifies what is happening. Don't hesitate to send more questions.
We're trying to use the package for computing fundamental domains of a discrete subgroup of $SL_4(\mathbb{R})$, and wonder about precision issues. There will be singularities (many more hypersurfaces intersecting in a point, say 10 or 15, than in the generic case, i.e. 4).
Can such a case be dealt with directly?
Does it use machine precision, or something better? The docs are silent on this.
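For context on what 64-bit machine precision means for the singular case asked about here, a short Python illustration (standard floating-point facts, not specific to this package): float64 carries roughly 16 significant decimal digits, and near a root of multiplicity k only about a 1/k fraction of those digits is attainable:

```python
import sys

# Float64 ("machine precision") has a unit roundoff of about 2.2e-16,
# i.e. roughly 16 significant decimal digits.
eps = sys.float_info.epsilon
print(eps)  # ~2.22e-16

# Near a multiple (singular) root the attainable accuracy degrades:
# a residual of size eps only pins down a root of multiplicity k to
# about eps**(1/k).  For the double root of (x - 1)^2, points within
# roughly sqrt(eps) ~ 1.5e-8 of the root are indistinguishable.
f = lambda x: (x - 1.0) ** 2
x = 1.0 + 1e-9                  # well inside that indistinguishable band
print(f(x) < eps)  # True: the residual is already below machine epsilon
```

This is why an intersection point of 10 or 15 hypersurfaces (instead of the generic 4) is numerically delicate in fixed 64-bit precision, even when the algorithm itself handles singularities in principle.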
@Ilia-Smilga