V9.0 Q&A

QUESTIONS LIST

The following are some of the questions asked during the webinar about the rollout of OpenVMS V9.0 on x86. We may have more events dedicated to OpenVMS on x86, but if you have a question, feel free to ask it on the forum.
  1. How can I determine if my Alpha or Itanium image uses VAX floating point?
  2. How are frame pointers used in the x86 Calling Standard?
  3. My Pascal app uses 32-bit pointers, do I need to change the app to use 64-bit pointers?
  4. How do I migrate a COBOL program from VAX to an OpenVMS x86 virtual machine?
  5. Can I get a copy of the Calling Standard?
  6. Will there be LT: devices?
  7. Do all the lexical functions exist on OpenVMS x86?
  8. Can OpenVMS V9.2 on x86 be a Container in Docker?
  9. Can an OpenVMS VM instance access physical tape devices and other SAN based storage?
  10. When will ACMS be available on OpenVMS x86?
  11. Will you support Hyper-V?
  12. Will you have documentation on device drivers and memory management?
  13. Can I cluster VAX 7.3 with OpenVMS x86?
  14. Will mixed architecture clusters be supported?
  15. What is the maximum number of CPUs supported by OpenVMS x86?
  16. What is the maximum volume capacity on 9.0?
  17. When will you have Clusters and Volume Shadowing on x86?
  18. Do you have performance data comparing Alpha, Itanium, x86?
  19. Will RMS record size be expanded beyond 32K?
  20. Will VSI support new graphics cards on x86?
  21. Will you have binary translators from VAX to x86, Alpha to x86, Itanium to x86?
  22. Will hypervisor tools used to manage VM guests work with OpenVMS x86?
  23. Will VMware “vMotion” work?
  24. Are we working with universities to introduce OpenVMS in college curriculums?
  25. Code and Data Memory Placement in OpenVMS

QUESTIONS & ANSWERS

  1. How can I determine if my Alpha or Itanium image uses VAX floating point?

    That is actually an interesting question as you can mix-and-match both kinds in the same image.

    Alpha compilers are VAX float by default but you can ask for IEEE floating (and there are several sub-styles of IEEE floating that we won't discuss here). Alpha hardware supports both VAX and IEEE.

    Itanium compilers switched the default to IEEE, but you can ask for VAX floating. Itanium hardware only supports IEEE; VAX floating is performed with additional software.

    Unless you use the /FLOAT qualifier (/REALSIZE for BASIC), you probably don't care, as any prior migration from Alpha to Itanium would already have exposed any dependency. Only applications that stored binary floating data in files really cared about using the same bit representation. For those customers, we have general-purpose conversion routines (CVT$CONVERTFLOAT) to assist with on-disk conversions of binary floating data.

    On Itanium, linker map files show which kind of floating point each object module uses; the compiler records it in each object module. That information does not exist on Alpha.

    For 9.0 customers, there are still some open issues surrounding LIB$WAIT, CVT$CONVERTFLOAT, and LIB$CVTF_TO/FROM_INTERNAL_TIME. A future implementation of VAX floating support will restore their old behavior, but converting to IEEE floating on Alpha/Itanium today will avoid the issue and improve performance.

  2. How are frame pointers used in the x86 Calling Standard?

    In the x86 world, there are two major calling standards: the one that Windows uses, and the one everybody else uses. OpenVMS uses the "everybody else" model (i.e. the AMD64 model used by Linux, with a few small upward-compatible additions documented in the OpenVMS Calling Standard).

    In the Linux x86 model, the frame pointer register (much like the VAX FP and Alpha FP) is used by the compiler to access stack-local storage. Such details are usually invisible to the vast majority of code.

    On Linux, for small leaf routines, the compilers/linker can omit setting up the frame pointer as an optimization. That is the same concept as JSB routines on VAX and null-frame routines on Alpha and Itanium. At present on x86, we always generate frame pointers: they cost almost nothing to create and make the exception handling code easier to implement. I suspect we will continue to always create frame pointers, just as we will always generate PIC (position independent code).

  3. My Pascal app uses 32 bit pointers, do I need to change the app to use 64 bit pointers?

    Since the stack and static data on x86 are still allocated in 32-bit address space, a 32-bit pointer is sufficient to point to them.

    With the discussion of moving code to 64-bit space, the question of "what size pointer do I need to point to a routine?" comes up. In the Linux world, you DO need a 64-bit pointer to point to 64-bit code. However, given the "recompile and go" approach we are taking with OpenVMS (and the tons of legacy code that might have allocated only a 32-bit variable to hold a procedure value), we have decided to implement linker-generated trampoline routines that continue to live in 32-bit address space. These trampoline routines can be pointed to by 32-bit pointers and are similar in concept to the Alpha Procedure Descriptor and Itanium Function Descriptor data structures. Code that takes the address of a routine will get a 32-bit pointer value to one of these linker-generated trampoline routines.

  4. How do I migrate a COBOL program from VAX to an OpenVMS x86 virtual machine?

    Aside from the question about binary translators, what about moving code all the way from VAX to x86?

    It is exactly the same as moving code from VAX to Alpha years ago. For most of the compilers, most programs are "recompile and go". COBOL does have some minor differences, which are listed in an appendix of the COBOL User Manual. Some VAX languages needed many changes (for example, all those .CALL_ENTRY directives added to every Macro-32 source file).

    The VAX to Alpha transition was influenced by the fact that most of our VAX compilers had their own code generators while Alpha and later compilers were re-engineered to use the GEM common code generator.

    If code has already been ported to either Alpha or Itanium, we expect most programs to "recompile and go" as noted by Clair's presentation.

    C++ will be an exception as the number of OpenVMS-isms that we'll put into clang will be a yet-to-be-determined subset of those in our Itanium C++ compiler. We did the same kind of "reduction" moving from Alpha C++ to Itanium C++. We'll use a "customer driven" approach instead of just doing everything we did before.

  5. Can I get a copy of the Calling Standard?

    Yes. Please visit this page.

  6. Will there be LT: devices?

    Yes, LT: devices will be supported. In general, we plan to have all currently supported network options available for OpenVMS x86-64.

  7. Do all the lexical functions exist on OpenVMS x86?

    All lexical functions provided by OpenVMS on Alpha and Itanium will be available on OpenVMS x86-64 and lexical functions that return architecture-specific information will be updated to return the correct information for the x86-64 platform.

  8. Can OpenVMS V9.2 on x86 be a Container in Docker?

    This has not been investigated. As our x86 support evolves, we will look into this and other deployment options.

  9. Can an OpenVMS VM instance access physical tape devices and other SAN based storage?

    We will publish a paper on access to storage devices from the VMs we support.

  10. When will ACMS be available on OpenVMS x86?

    Our current plan is to have all layered products, one of which is ACMS, available for the V9.1 EAK. We will keep you updated on our progress.

  11. Will you support Hyper-V?

    We are committed to supporting Oracle VirtualBox, KVM, and VMware in the first production release V9.2. Hyper-V is next on the list but we cannot yet commit to it for V9.2.

  12. Will you have documentation on device drivers and memory management?

    Existing documentation will be updated for x86-64 and new documentation will be created where necessary. This will include documentation covering important areas such as device drivers and memory management.

  13. Can I cluster VAX 7.3 with OpenVMS x86?

    Clustering of VSI OpenVMS for x86-64 with VAX/VMS systems will not be supported for any versions of VAX/VMS.

  14. Will mixed architecture clusters be supported?

    Mixed architecture clusters of Alpha, Itanium, x86-64 systems will be supported for VSI OpenVMS. As per the previous question, clustering with VAX/VMS systems will not be supported.

  15. What is the maximum number of CPUs supported by OpenVMS x86?

    The maximum number of CPUs (physical or virtual) supported by OpenVMS x86-64 V9.2 is 64. Support for more than 64 CPUs may come in a future release.

  16. What is the maximum volume capacity on 9.0?

    The maximum volume capacity on 9.0 is the same as for Alpha and Itanium today (2 TB). Looking to the future, the new file system currently under development will support volumes of up to 100 TB, although the maximum file size will be unchanged.

  17. When will you have Clusters and Volume Shadowing on x86?

    Clustering and volume shadowing will be available in the V9.1 EAK release of VSI OpenVMS x86-64.

  18. Do you have performance data comparing Alpha, Itanium, x86?

    We have done no performance work as yet. Performance analysis will start once we have all the system components in place for V9.1 and we have native, optimized compilers. We will publish the results of our work.

  19. Will RMS record size be expanded beyond 32K?

    There are no plans to expand the maximum RMS record size. The current focus is on completing the port of the existing system to x86-64 with as little non-essential change as possible. Once this goal is achieved, we can consider various enhancements.

  20. Will VSI support new graphics cards on x86?

    No new graphics cards will be supported in OpenVMS x86-64 V9.2. It is possible that additional cards could be considered in the future.

  21. Will you have binary translators from VAX to x86, Alpha to x86, Itanium to x86?

    We created a prototype binary translator for Alpha to x86-64, however there are no plans at this time to develop this further. Any additional work will be evaluated based on customer requirements.

  22. Will hypervisor tools used to manage VM guests work with OpenVMS x86?

    We are currently focused on getting OpenVMS to run as a guest in selected virtual machine environments. We have not yet investigated hypervisor guest management tools. That will come after we have the complete OpenVMS environment running.

  23. Will VMware “vMotion” work?

    In theory, there is no reason to believe that this will not work. As noted in the answer to #22, we will test it in due time.

  24. Are we working with universities to introduce OpenVMS in college curriculums?

    We are reviewing possibilities for engaging with higher education after OpenVMS V9.2 is released.

  25. Code and Data Memory Placement in OpenVMS

    For the port to x86-64, we made the decision to move code out of the 32-bit address space as a way to make room for more static data and heap storage. It is a partial solution for traditional 32-bit programs that are running out of room. It is not a perfect solution, as it does not address the size of the stack, but it is a good solution for several applications that were unwilling or unable to modify their programs to use 64-bit heap storage.

    One additional question that now comes up is "If the code is now in 64-bit space, do I need a 64-bit pointer to point to it?" On Alpha and Itanium, function values are pointers to "descriptors" and not to the code. The code address is one of the fields in those descriptors and has always been a 64-bit pointer. These descriptors are allocated in 32-bit memory, so they can be pointed to with a 32-bit pointer. However, on x86, function values are indeed pointers to code; you can simply do an indirect call with a function value. Since there are assumptions that a function value is only 32 bits in size, we could not change function values to 64-bit pointers without forcing source code changes. To solve this problem, the linker creates small trampoline routines allocated in the 32-bit P0 address range, which then transfer control to the actual code residing in 64-bit space.