How can I determine if my Alpha or Itanium image uses VAX floating point?
That is actually an interesting question, as you can mix and match both kinds in the same image.
Alpha compilers are VAX float by default, but you can ask for IEEE floating (and there are several sub-styles of IEEE floating that we won't discuss here). Alpha hardware supports both VAX and IEEE.
Itanium compilers switched the default to IEEE but you can ask for VAX floating. Itanium hardware only supports IEEE, and VAX floating is performed with additional software.
Unless you use the /FLOAT qualifier (/REAL_SIZE for BASIC), you probably don't care, as any prior migration from Alpha to Itanium would already have exposed any dependency. Only applications that store binary floating-point data in files really cared about keeping the same bit representation, and for those cases we provide general-purpose conversion routines (CVT$CONVERT_FLOAT) to assist with on-disk conversion of binary floating-point data.
The linker map files on Itanium show which kind of floating point is used in each object module; the compiler records it in each object module. That information does not exist on Alpha.
For V9.2 customers, there are still some open issues surrounding LIB$WAIT, CVT$CONVERT_FLOAT, and LIB$CVTF_TO/FROM_INTERNAL_TIME. A future implementation of VAX floating support will restore their old behavior, but converting today to IEEE floating on Alpha/Itanium will avoid the issue and improve performance.
How are frame pointers used in the x86 Calling Standard?
In the x86 world, there are two major calling standards: the one Windows uses, and the one everybody else uses. OpenVMS uses the "everybody else" model (i.e., the AMD64 model used by Linux, with a few small upward-compatible additions documented in the OpenVMS Calling Standard).
In the Linux x86 model, the frame pointer register (much like the VAX FP and Alpha FP) is used by the compiler to access stack-local storage. Such details are usually invisible to the vast majority of code.
On Linux, for small leaf routines, the compilers can omit setting up the frame pointer as an optimization. That is the same concept as JSB routines on VAX and null-frame routines on Alpha and Itanium. At present on x86, we always generate frame pointers: they cost almost nothing to create, and they make the exception-handling code easier to implement. I suspect we will continue to always create frame pointers, just as we will always generate PIC (Position Independent Code).
My Pascal app uses 32-bit pointers, do I need to change the app to use 64-bit pointers?
Since the stack and static data on x86 are still allocated in 32-bit address space, a 32-bit pointer is sufficient to point to them.
With the discussion of moving code to 64-bit space, the question of "what size pointer do I need to point to a routine?" comes up. In the Linux world, you DO need a 64-bit pointer to point to 64-bit code. However, given the "recompile and go" approach we are taking with OpenVMS (and the tons of legacy code that might have allocated only a 32-bit variable to hold a procedure value), we have decided to implement linker-generated trampoline routines that continue to live in 32-bit address space. These trampoline routines can be pointed to by 32-bit pointers and are similar in concept to the Alpha Procedure Descriptor and Itanium Function Descriptor data structures. Code that takes the address of a routine will get a 32-bit pointer value to one of these linker-generated trampoline routines.
How do I migrate a COBOL program from VAX to an OpenVMS x86 virtual machine?
Aside from the question about binary translators, what about moving code all the way from VAX to x86?
It is exactly the same as moving code from VAX to Alpha years ago. For most of the compilers, most programs are "recompile and go". COBOL does have some minor differences, listed in an appendix of the COBOL User Manual. Some VAX languages needed many changes (for example, the .CALL_ENTRY directives that had to be added to ALL Macro-32 source files).
The VAX to Alpha transition was influenced by the fact that most of our VAX compilers had their own code generators while Alpha and later compilers were re-engineered to use the GEM common code generator.
If code has already been ported to either Alpha or Itanium, we expect most programs to "recompile and go".
C++ will be an exception as the number of OpenVMS-isms that we'll put into Clang will be a yet-to-be-determined subset of those in our Itanium C++ compiler. We did the same kind of "reduction" moving from Alpha C++ to Itanium C++. We'll use a "customer-driven" approach instead of just doing everything we did before.
Can I get a copy of the Calling Standard?
Yes. Please visit this page.
Will there be LT: devices?
Yes, LT: devices will be supported. In general, we plan to have all currently supported network options available for OpenVMS x86-64.
Do all the lexical functions exist on OpenVMS x86?
All lexical functions provided by OpenVMS on Alpha and Itanium will be available on OpenVMS x86-64, and lexical functions that return architecture-specific information will be updated to return the correct information for the x86-64 platform.
Can OpenVMS V9.2 on x86 be a container in Docker?
As of OpenVMS V9.2, this has not been investigated yet. As our x86 support evolves, we will look into this and other deployment options.
Can an OpenVMS VM instance access physical tape devices and other SAN-based storage?
We will publish a paper on access to storage devices from the VMs we support.
Will you support Hyper-V?
Yes, we plan to support Hyper-V in later releases of OpenVMS on x86.
Will you have documentation on device drivers and memory management?
Existing documentation will be updated for x86-64 and new documentation will be created where necessary. This will include documentation covering important areas, such as device drivers and memory management.
Can I cluster VAX 7.3 with OpenVMS x86?
Clustering of VSI OpenVMS for x86-64 with VAX/VMS systems is still under investigation at this time.
Will mixed-architecture clusters be supported?
Mixed-architecture clusters of Alpha, Itanium, and x86-64 systems are supported for VSI OpenVMS.
What is the maximum number of CPUs supported by OpenVMS x86?
The maximum number of CPUs (physical or virtual) supported by OpenVMS x86-64 V9.2 is 32. Supporting more than 32 CPUs may be implemented in a future release.
What is the maximum volume capacity on V9.2?
The maximum volume capacity on V9.2 is the same as for Alpha and Itanium today (2 TB).
When will you have Clusters and Volume Shadowing on x86?
Clustering and volume shadowing are available as of OpenVMS V9.2.
Do you have performance data comparing Alpha, Itanium, and x86?
We have done no performance work yet. Performance analysis will start once we have all the system components in place for V9.2 and we have native optimizing compilers. We will publish the results of our work.
Will RMS record size be expanded beyond 32K?
There are no plans to expand the maximum RMS record size. The current focus is on completing the port of the existing system to x86-64 with as little non-essential change as possible. Once this goal is achieved, we can consider various enhancements.
Will VSI support new graphics cards on x86?
No new graphics cards will be supported in OpenVMS x86-64 V9.2. It is possible that additional cards could be considered in the future.
Will you have binary translators from VAX to x86, Alpha to x86, Itanium to x86?
We created a prototype binary translator for Alpha to x86-64; however, there are no plans at this time to develop it further. Any additional work will be evaluated based on customer requirements.
Will hypervisor tools used to manage VM guests work with OpenVMS x86?
We have started investigating hypervisor interfaces and their management tools, for example VMware Tools, but any implementations will be in releases after V9.2.
Will VMware vMotion work?
We have started investigating that, but any implementations will be in releases after V9.2.
Is VSI working with universities to introduce OpenVMS in college curriculums?
We are reviewing possibilities for engaging with higher education after OpenVMS V9.2 is released.
Code and Data Memory Placement in OpenVMS
For the port to x86-64, we made the decision to move code out of the 32-bit address space as a way to make room for more static data and heap storage. It is a partial solution for those traditional 32-bit programs that are running out of room. It isn't a perfect solution, as it does not address the size of the stack, but it is a good solution for several applications that were unwilling or unable to modify their programs to use 64-bit heap storage.
One additional question that now comes up is "If the code is now in 64-bit space, do I need a 64-bit pointer to point to it?" On Alpha and Itanium, function values are pointers to "descriptors" and not the code. The code address is one of the fields in those descriptors and has always been a 64-bit pointer. These descriptors are allocated in 32-bit memory, so they can be pointed to with a 32-bit pointer. However, on x86, function values are indeed pointers to code. It is expected that you can just do an indirect call with a function value. Since there are assumptions that a function value is only 32 bits in size, we could not simply change function values to be 64-bit pointers without causing source code changes. To solve this problem, the linker creates small trampoline routines that are allocated in the 32-bit P0 address range which then transfer control to the actual code that resides in 64-bit space.