X86 Port: Frequently Asked Questions

Actually, you can mix and match both kinds of floating-point format (VAX and IEEE) in the same image.

Alpha compilers default to VAX format, but you can ask for IEEE format (and there are several substyles of IEEE format that we won't discuss here). Alpha hardware supports both VAX and IEEE formats.

Itanium compilers switched the default to IEEE format, but you can ask for VAX format. Itanium hardware supports only IEEE format; VAX format is implemented in software, with some performance overhead.

x86 compilers also default to IEEE format, but you can ask for VAX float. Like Itanium, x86-64 CPUs support only IEEE format; VAX format is implemented in software, with some performance overhead.
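
For example, code that genuinely is format sensitive can guard itself at compile time. This is a minimal sketch, assuming the long-standing DEC C / VSI C convention that the compiler predefines __IEEE_FLOAT, __G_FLOAT, and __D_FLOAT to reflect the /FLOAT qualifier; verify the macro names against your compiler's documentation:

    /* Minimal sketch of format-sensitive code checking the compiler's
       floating-point mode at compile time. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__IEEE_FLOAT) && __IEEE_FLOAT
        puts("Compiled for IEEE floating-point format");
    #elif defined(__G_FLOAT) && __G_FLOAT
        puts("Compiled for VAX G-float format");
    #elif defined(__D_FLOAT) && __D_FLOAT
        puts("Compiled for VAX D-float format");
    #else
        puts("Floating-point format macros not defined by this compiler");
    #endif
        return 0;
    }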

Unless you use the /FLOAT qualifier (/REAL_SIZE for BASIC), you probably do not have floating-point-format-sensitive code; any prior migration from Alpha to Itanium would have exposed such a dependency. Only applications that stored binary floating-point data in files really cared about using the same bit representation. For those customers, we have general purpose conversion routines (CVT$CONVERT_FLOAT) to assist today with on-disk conversion of binary floating-point data. A sketch of such a conversion appears below.
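
As an illustration only, here is a hedged sketch of converting a single VAX F-float value to IEEE S-float with CVT$CONVERT_FLOAT. The prototype and the CVT$K_* type codes follow the RTL documentation; the header name (<cvtdef.h>) and the file-reading step are assumptions to verify on your system.

    #include <stdio.h>
    #include <cvtdef.h>                  /* CVT$K_* data-type codes */

    /* Prototype as documented for CVT$CONVERT_FLOAT; normally supplied
       by a system header. */
    unsigned int cvt$convert_float(const void *input_value,
                                   unsigned int input_data_type,
                                   void *output_value,
                                   unsigned int output_data_type,
                                   unsigned int options);

    int main(void)
    {
        unsigned int vax_f_bits = 0;     /* 4 bytes of VAX F-float from disk */
        float ieee_s = 0.0f;             /* IEEE S-float result */

        /* ... read vax_f_bits from the legacy data file here ... */

        unsigned int status = cvt$convert_float(&vax_f_bits, CVT$K_VAX_F,
                                                &ieee_s, CVT$K_IEEE_S,
                                                0 /* default options */);
        if (status & 1)                  /* odd status means success */
            printf("converted value: %f\n", ieee_s);
        else
            printf("conversion failed, status = %u\n", status);
        return 0;
    }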

The linker MAP files on Itanium show what kind of floating point is used in each object module; the compilers record this information in each object module. That feature does not exist on Alpha. The x86 compilers and linker have latent support for this feature, but it is not fully implemented as of OpenVMS V9.2-1.

In the x86 world, there are two major calling standards: the one that Windows uses and the one used by everybody else. OpenVMS uses the "everybody else" model, i.e., the AMD64 calling standard used by Linux, with a few small upward-compatible additions documented in the OpenVMS Calling Standard.

In the Linux x86 model, the frame pointer register (much like the VAX FP and Alpha FP) is used by the compiler to access stack-local storage. Such details are invisible to the vast majority of code.

On Linux, for small leaf routines, the compilers/linker can omit setting up the frame pointer as an optimization; that is the same concept as JSB routines on VAX and null-frame routines on Alpha and Itanium. On OpenVMS x86-64, we always generate frame pointers: they cost almost nothing to create, and they make the exception-handling code easier to implement. The sketch below illustrates the idea.
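
As an illustration (not actual compiler output), here is a small routine together with the conceptual x86-64 prologue a frame-pointer-based compiler would emit for it:

    /* A small leaf routine.  On OpenVMS x86-64 the compiler always emits
       a frame-pointer prologue, conceptually:
           push %rbp          ; save the caller's frame pointer
           mov  %rsp, %rbp    ; establish this routine's frame
       Locals such as 'total' are then addressed at fixed offsets from
       %rbp, which is also what the exception-handling code walks. */
    int sum3(int a, int b, int c)
    {
        int total = a + b + c;   /* lives in frame-pointer-relative storage */
        return total;
    }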

Since the stack and static data on x86 are still allocated in 32-bit address space, a 32-bit pointer is sufficient to point to them.

With the discussion of moving code to 64-bit space, the question of "what size pointer do I need to point to a routine?" comes up. In the Linux world, you need a 64-bit pointer to point to 64-bit code. However, given the "recompile and go" approach we are taking with OpenVMS (and the tons of legacy code that might have allocated only a 32-bit variable to hold a procedure value), we have decided to implement linker-generated trampoline routines that continue to live in 32-bit address space. These trampoline routines can be pointed to by 32-bit pointers and are similar in concept to the Alpha Procedure Descriptor and Itanium Function Descriptor data structures. Code that takes the address of a routine gets a 32-bit pointer to one of these trampolines.

For the port to x86-64, we made the decision to move code out of the 32-bit address space as a way to make room for more static data and heap storage. It is a partial solution for traditional 32-bit programs that are running out of room. It isn't a perfect solution, as it does not address the size of the stack, but it is a good solution for several applications that were unwilling or unable to modify their programs to use 64-bit heap storage.

One additional question that now comes up is "If the code is now in 64-bit space, do I need a 64-bit pointer to point to it?" On Alpha and Itanium, function values are pointers to "descriptors", not to the code itself. The code address is one of the fields in those descriptors and has always been a 64-bit pointer; the descriptors themselves are allocated in 32-bit memory, so they can be pointed to with a 32-bit pointer. On x86, however, function values really are pointers to code: you are expected to be able to do an indirect call through a function value. Since existing code assumes that a function value is only 32 bits in size, we could not simply change function values to 64-bit pointers without forcing source code changes. To solve this problem, the linker creates small trampoline routines, allocated in the 32-bit P0 address range, which transfer control to the actual code residing in 64-bit space. The sketch below shows the legacy pattern this design preserves.
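
Here is a hedged sketch of that legacy pattern; the routine name is hypothetical, and the casts rely on the default 32-bit pointer size of the OpenVMS C compilers:

    /* Legacy pattern: storing a procedure value in a 32-bit cell.
       On OpenVMS x86-64 this continues to work because taking a
       routine's address yields the 32-bit address of a linker-generated
       trampoline in P0 space, not the 64-bit address of the code. */
    #include <stdio.h>

    typedef void (*proc_value_t)(void);

    static void my_routine(void)
    {
        printf("my_routine called\n");
    }

    int main(void)
    {
        /* A 32-bit integer cell, as plenty of legacy code declared;
           the trampoline address fits because it lives in P0 space. */
        unsigned int cell = (unsigned int)my_routine;

        proc_value_t p = (proc_value_t)cell;
        p();    /* indirect call: enters the trampoline, which then
                   transfers control to the real code in 64-bit space */
        return 0;
    }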

Porting is exactly the same as moving code from VAX to Alpha years ago. For most of the compilers, most programs are "recompile and go". COBOL does have some minor differences, which are listed in an appendix of the COBOL User Manual. Some VAX languages needed many changes (for example, all those .CALL_ENTRY directives added to every Macro-32 source file).

If code has already been ported to either Alpha or Itanium, we expect most programs to "recompile and go".

C++ will be an exception, as the set of OpenVMS-isms that we will put into Clang will be a subset of those in our Itanium C++ compiler. We did the same kind of "reduction" moving from Alpha C++ to Itanium C++. We will use a "customer-driven" approach instead of simply doing everything we did before.

Yes. Please visit this page.

Yes, LT: devices will be supported. In general, we plan to have all currently supported network options available for OpenVMS x86-64.

All lexical functions provided by OpenVMS on Alpha and Itanium will be available on OpenVMS x86-64, and lexical functions that return architecture-specific information will be updated to return the correct information for the x86-64 platform.
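
As an illustration, the DCL lexical F$GETSYI("ARCH_NAME") reports the architecture name. The sketch below uses its run-time counterpart, LIB$GETSYI; the item code and header names follow the OpenVMS documentation, but verify them on your system.

    /* Minimal sketch: asking the running system for its architecture
       name via LIB$GETSYI, the RTL counterpart of the DCL lexical
       F$GETSYI("ARCH_NAME"). */
    #include <stdio.h>
    #include <descrip.h>                 /* struct dsc$descriptor_s */
    #include <syidef.h>                  /* SYI$_ARCH_NAME */
    #include <lib$routines.h>            /* lib$getsyi prototype */

    int main(void)
    {
        char buf[32];
        unsigned short len = 0;
        struct dsc$descriptor_s desc =
            { sizeof(buf), DSC$K_DTYPE_T, DSC$K_CLASS_S, buf };
        long item = SYI$_ARCH_NAME;

        unsigned int status = lib$getsyi(&item, 0, &desc, &len, 0, 0);
        if (status & 1)                  /* odd status means success */
            printf("Architecture: %.*s\n", (int)len, buf);
        return (int)status;
    }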

As of OpenVMS V9.2-1, this has not been investigated yet. As our x86 support evolves, we will look into this and other deployment options.

We will publish a paper on access to storage devices from the VMs we support.

We are experimenting with Hyper-V but as yet have no planned support date.

Existing documentation will be updated for x86-64 and new documentation will be created where necessary. This will include documentation covering important areas, such as device drivers and memory management.

No, clustering of VSI OpenVMS for x86-64 with VAX/VMS systems will not be officially supported.

Mixed-architecture clusters of Alpha, Itanium, and x86-64 systems are supported for VSI OpenVMS.

The maximum number of CPUs (physical or virtual) supported by OpenVMS x86-64 V9.2-1 is 32. Support for more than 32 CPUs may be added in a future release.

The maximum volume capacity on V9.2-1 is the same as for Alpha and Itanium today (2 TB).

Clustering and volume shadowing are available in OpenVMS V9.2 and later.

We have done no performance work yet. Performance analysis will start once we have all the system components in place and have native optimizing compilers. We will publish the results of our work.

There are no plans to expand the maximum RMS record size. The current focus is on completing the port of the existing system to x86-64 with as little non-essential change as possible. Once this goal is achieved, we can consider various enhancements.

No new graphics cards are supported as of OpenVMS x86-64 V9.2-1. It is possible that additional cards could be considered in the future.

We have started investigating hypervisor interfaces and their management tools (e.g., VMware Tools), but we do not have anything ready for release yet.

We are testing vMotion in various configurations. We will publish documentation for the steps necessary to use this vSphere component with VMS guests.

We are looking into possibilities for engaging with higher education.