Computations with precision higher than the IEEE 754 double-precision standard (about 16 significant digits) have recently become necessary. Although software routines for high-precision computation are available in Fortran and C, users must install and run such routines on their own computers, which requires detailed knowledge of them. We have constructed a user-friendly online system for octuple-precision computation. In our Web system, users with no knowledge of high-precision computation can easily perform octuple-precision computations by choosing mathematical functions and entering their argument(s), by writing simple mathematical expression(s), or by uploading C program(s). In this paper we enhance the Web system above by adding the facility of upload...
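The octuple precision mentioned above carries roughly four times the ~16 significant digits of an IEEE 754 double, i.e. about 64 digits. As a minimal illustration of what such precision looks like (using Python's standard-library `decimal` module, not the authors' Web system):

```python
from decimal import Decimal, getcontext

# Octuple precision corresponds to roughly 4x the ~16 significant
# digits of IEEE 754 double precision, i.e. about 64 digits.
getcontext().prec = 64

# 1/3 to 64 significant digits -- far more than a double can hold
third = Decimal(1) / Decimal(3)
print(third)

# sqrt(2) to 64 significant digits
root2 = Decimal(2).sqrt()
print(root2)
```

The same 64-digit results would be produced by any correctly rounded decimal arithmetic at this working precision.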
The results of arithmetic operations performed on digital computers are seldom exactly correct. Ther...
One can simulate low-precision floating-point arithmetic via software by executing each arithmetic o...
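The simulation technique described here computes each operation in double precision and then rounds the result to the target format. A simplified sketch of that idea, rounding to `p` significant bits (it ignores exponent-range limits, so overflow, underflow, and subnormals are not modeled):

```python
import math

def round_to_precision(x: float, p: int) -> float:
    """Round x to p significant bits (round-to-nearest-even),
    simulating storage in a lower-precision binary format.
    Simplified: exponent-range limits are not modeled."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** p
    return math.ldexp(round(m * scale) / scale, e)

def lp_add(a: float, b: float, p: int = 11) -> float:
    # Compute in double precision, then round the result to p bits:
    # one low-precision operation, emulated in software.
    return round_to_precision(a + b, p)

# With an 11-bit significand (as in IEEE half precision), an addend
# smaller than one unit in the last place of 1.0 is rounded away.
print(lp_add(1.0, 2.0**-12))
```

Each double-precision operation incurs only one rounding, so applying this rounding after every operation reproduces the low-precision result exactly whenever the target format is sufficiently narrower than double.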
In recent years approximate computing has been extensively explored as a paradigm to design hardware...
This paper describes a new software package for performing arithmetic with an arbitrarily high level...
At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scienti...
In basic computational physics classes, students often raise the question of how to comput...
The chief advantage of the digital computer is that it can be instructed to perform complex or repet...
We propose the first hardware implementation of standard arithmetic operators – addition, multiplica...
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-...
Exact computer arithmetic has a variety of uses including, but not limited to, the robust implementa...
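One common form of exact computer arithmetic is rational arithmetic, and a classic robustness use case is the geometric orientation predicate. A minimal sketch using Python's standard-library `Fraction` (the `orient` helper is a hypothetical example, not from the abstract):

```python
from fractions import Fraction

# Floating-point arithmetic incurs rounding error:
print(0.1 + 0.2 == 0.3)          # False in binary floating point

# Exact rational arithmetic avoids it entirely:
a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))      # True

# A tiny exact geometric predicate: the sign of a 2x2 determinant
# (orientation of vectors (ax, ay) and (bx, by)).
def orient(ax, ay, bx, by):
    d = Fraction(ax) * Fraction(by) - Fraction(ay) * Fraction(bx)
    return (d > 0) - (d < 0)

print(orient(1, 2, 2, 4))        # 0: the two vectors are collinear
```

Because every intermediate value is an exact rational, the sign returned by the predicate is always correct, which is exactly what robust geometric algorithms require.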
Low-precision floating-point arithmetic can be simulated via software by executing each arithmetic o...
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computatio...
For many years, computing systems have relied on the guaranteed numerical precision of each step in complex com...