Difference: HipeKnownIssuesOld (7 vs. 8)

Revision 8 (2016-04-08) - AlvarGarcia

Known issues in HIPE 12

SmoothSpectrumTask and ResampleSpectrumTask do not propagate weights correctly

Ticket number: HCSS-15583 (Won't fix)

Versions affected: All versions of HIPE since 8.0

Description: If either task is applied to a spectrum that contains a column of errors instead of weights, the task converts the errors to weights, propagates the weights, and then repopulates the error column by converting the propagated weights back to errors. The resulting error column has not been propagated correctly for the operation applied.

Workaround: Disregard the output error column when the input spectrum contained errors rather than weights.
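To see why the weight round trip misstates the propagated errors, here is a minimal numeric sketch (plain NumPy, not HIPE code; the two-point average and the particular propagation step are illustrative assumptions, not the actual task internals):

```python
import numpy as np

# Two points with errors 1.0 and 2.0, combined into an average.
err = np.array([1.0, 2.0])

# Correct error propagation for the mean of independent values:
# sigma_avg = sqrt(sum(err_i^2)) / N
correct = np.sqrt(np.sum(err**2)) / len(err)

# The weight round trip: convert errors to weights (w = 1/err^2),
# propagate the weights (here assumed to be a plain average), then
# convert the propagated weight back to an error.
w = 1.0 / err**2
round_trip = 1.0 / np.sqrt(np.mean(w))

print(correct)     # ~1.118
print(round_trip)  # ~1.265 -- not the correctly propagated error
```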

HIPE FITS Writer format could be incorrectly read by other software

Ticket number: HCSS-19248

Versions affected: HIPE 12.x

Description: HIPE uses the OGIP long-string convention, a non-standard way of storing long values in FITS keywords. A record written with this convention ends with an & character (the record itself is valid for all FITS-compliant readers, as it stays under the 68-character limit), is followed by the special keyword CONTINUE '' / &, and the long value is completed with a COMMENT record. For example:

TTYPE1  = 'designation&'
CONTINUE  '' / &
COMMENT Original element type: char, Empty values appear as null

Many FITS readers ignore this convention, so the special character & is read and included as the final character of the value. Others (such as IDL) do the same but additionally treat & as an invalid character and replace it with an underscore (_).

Workaround: None at this time.
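If you have to post-process values read by a non-OGIP-aware reader, a small cleanup function along these lines may help (a hedged sketch; the '_' case covers IDL's substitution and could clip a legitimate trailing underscore, so apply it only where you know the convention was in play):

```python
def strip_ogip_continuation(value):
    """Drop a trailing OGIP continuation marker ('&', or '_' after
    IDL's substitution) from a keyword value read verbatim.
    Caution: a legitimate trailing '_' would also be removed."""
    if value.endswith('&') or value.endswith('_'):
        return value[:-1]
    return value

print(strip_ogip_continuation('designation&'))  # -> designation
```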

Old syntax using HIFI class PolarPair gives wrong results

Ticket number: none (API change)

Versions affected: HIPE 12.0

Description: Use of the PolarPair class is now discouraged, and the polarPair task (note the capitalisation) has been modified so that it can be used as part of the interactive HIFI pipeline and so that its syntax is in line with the other tasks in the Spectral Toolbox. If the old syntax is used, e.g.,

pp = PolarPair(wbs_h, wbs_v)
av = pp.avg()

incorrect results are obtained.

This problem did not affect previous HIPE versions. A warning will be written to the console in HIPE 13 if the old syntax is used.

Workaround: Use the new syntax running the task:

av=polarPair(ds1=wbs_h, ds2=wbs_v)

Limitations of the task for updating scripts that use the old syntax to access return parameters

Ticket number: HCSS-19080 (Won't fix)

Versions affected: HIPE 12.

  • Task calls spanning multiple lines (with \ separation) are not detected. Some SPIRE scripts use such multi-line task calls.
  • The dialog box should show all the lines that are going to be changed, but only the first line is shown.
  • Some non-standard task calls, such as using two = signs to assign the result to two variables at once, are not detected.
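For illustration, the two undetected call styles look like this (plain Python with a hypothetical someTask stand-in; real HIPE task names will differ):

```python
def someTask(input=None, option=False):
    # Hypothetical stand-in for a HIPE task call.
    return (input, option)

# 1. A call spanning multiple lines with backslash separation:
result = someTask(input=[1, 2], \
                  option=True)

# 2. Two = signs putting the result into two variables at once:
a = b = someTask(input=[1, 2])
```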

Old Java 6 entries in the dynamic library environment variable cause HIPE 12 to crash on OS X

Ticket number: HCSS-18891 (Third party; Won't fix)

Versions affected: HIPE 12.

Error message:

$ ./hcss.dp.pacs-12.0.1553/bin/hipe
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x000000010794f647, pid=49119, tid=6403
#
# JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 1.7.0_45-b18)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode bsd-amd64 compressed oops)
# Problematic frame:
# C  [libjvmlinkage.dylib+0x3647]  JVM_GetClassCPEntriesCount+0x17
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/USER/hcss/hcss.dp.pacs-12.0.2340/bin/hs_err_pid49119.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#

Root cause: the shared library path (DYLD_LIBRARY_PATH) still contains references to the old Java 6 installation.

$ env | grep DYLD_LIBRARY_PATH
DYLD_LIBRARY_PATH=:/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/runtime/maci64:
/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/sys/os/maci64:
/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/bin/maci64:
/System/Library/Frameworks/JavaVM.framework/JavaVM:
/System/Library/Frameworks/JavaVM.framework/Libraries

Workaround: Remove the entries that contain JavaVM from this environment variable. To make the change persistent, modify or create the .bash_profile configuration file of the terminal and export the variable there without the offending entries.
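For example, a .bash_profile entry along these lines keeps the MATLAB paths from the listing above but drops the JavaVM ones (paths are illustrative; adapt them to your own installation):

```shell
# Re-export DYLD_LIBRARY_PATH without the JavaVM entries.
export DYLD_LIBRARY_PATH=/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/runtime/maci64:\
/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/sys/os/maci64:\
/Applications/MATLAB/MATLAB_Compiler_Runtime/v716/bin/maci64
```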

 

Known issues in HIPE 11

Reading observations stored as tarball archives that contain new data types does not work in HIPE 11

WARNING: Could not open /home/user/1342219630-herschel.ia.obs.ObservationContext-580157.xml: 
herschel.ia.task.TaskException: Error processing getObservation task: 
Class definition not found for urn urn:hsa:herschel.spire.ia.dataset.PhotApertureEfficiency:0: herschel.spire.ia.dataset.PhotApertureEfficiency
Root cause: the class herschel.spire.ia.dataset.PhotApertureEfficiency (among others) is new in data processed with HCSS 12 and does not exist in previous versions of HIPE, so those versions cannot open tarballs containing these types of data.
Error message: Querying the pool using the Product Browser returns nothing (no error message). Opening the observation from the Console using getObservation returns the following error message:
 herschel.ia.task.TaskException: Error processing getObservation task: Index Version not compatible. Expected : 4 Existing: 6. Pool requires upgrading before you can use it with this software. In order to do so, you need to run pool_name.rebuildIndex() to upgrade. Depending on the size of the pool this process can take a long time, please be patient!
More information can be found in the Data Analysis Guide, section 1.2.2.1 (Update of index format for local stores).
  Notes about the error message:
  • pool_name is the name of the pool you are trying to load.
 Workaround: Any of the following commands will rebuild the index of the pool:
  • This rebuilds the index of the specified pool using a static method of the LocalStoreFactory class:
LocalStoreFactory.getStore(pool_name).rebuildIndex()
 
store.writablePool.rebuildIndex()
 

Case 3:

 

Pools unreadable with Unknown format version: -11 error

If you installed a developer build of HIPE 10 between 10.0.2069 and 10.0.2674 and modified any of your pools, those pools are no longer readable in any other HIPE version. When trying to load a pool with getObservation you will get the following error message:

herschel.ia.task.TaskException: Error processing getObservation task: Unknown format version: -11

See the Data Analysis Guide for a recovery procedure. You can download the first and second scripts mentioned in the procedure from this page. Change the extension from .py.txt to .py after downloading.


Format change for exported variables

The file format of variables exported with File --> Session --> Export has changed in HIPE 10. This means that HIPE 10 will not be able to read session variables exported with HIPE 9, and vice versa.
If you exported variables with HIPE 9 or a previous version, follow this workaround to read them in HIPE 10:
 
  1. Open HIPE 9 and import the exported variables with File --> Session --> Import.
  2. Set Save variables on exit in the Preferences dialogue window, under General --> Startup & Shutdown.
  3. Exit HIPE 9.
  4. Open HIPE 10. The variables will be restored in the Variables view.
  5. Export the variables with the new format of HIPE 10, with File --> Session --> Export.
 

HIPE cannot send data to an external application via SAMP


Problem with PACS background normalization script

The ChopNodBackgroundNormalizationRange interactive pipeline script (Pipelines --> PACS --> Spectrometer --> Chopper large range scan SED --> Background Normalization) is missing a definition of the variable target. After loading the observation, assuming it is stored in a variable called obs, you can define the missing target variable as follows:
 
target = obs.meta["object"].value.replace(" ","_")
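In plain Python terms (with a hypothetical MetaValue stand-in for the HIPE metadata entry), the line simply turns the object name into an underscore-separated string suitable for file names:

```python
class MetaValue:
    """Hypothetical stand-in for an entry of obs.meta."""
    def __init__(self, value):
        self.value = value

meta = {"object": MetaValue("NGC 7027")}
target = meta["object"].value.replace(" ", "_")
print(target)  # -> NGC_7027
```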
 

PACS task specFlatFieldLine problem with NaN values

if not cube.containsMask("INVALID"):
  cube.addMask("INVALID","Invalid signal values")
cube.setMask("INVALID",invalid)
slicedCube.replace(i,cube)
 

HIPE hangs when pressing the activate/deactivate button in the mask panel of the Spectrum Explorer PACS extensions


HIPE crash linked to JAMA (Java Matrix) library

HIPE may crash without warning when executing parts of the bundled JAMA library. If you experience a sudden crash when running a script, this may be the cause, even if there are no obvious references to matrices in the script. Take the following steps to prevent the crash from happening:

  1. Go to the .hcss directory in your home directory. Open the hipe.props file, or create it if it does not exist.
  2. Add the following line to the file:
    java.vm.options = -XX:CompileCommand=exclude,Jama/LUDecomposition,<init>
  3. Restart HIPE.
 

Problem with transpose task

 

Velocity map creation in Legacy Cube Spectrum Analysis Toobox (CSAT) fails

The creation of velocity maps with the Legacy CSAT fails; the task hangs without an error message. CSAT is no longer supported; the new cube toolbox, which uses an improved algorithm for velocity map making, should be used instead. See section 6.8.3 in the Herschel Data Analysis Guide for information about using the new cube toolbox.
 

Flags set in HIFI's flagTool task do not appear to be used by the fitBaseline task

 
calTree = getCalTree(time=frames.startDate)
frames=photAssignRaDec(frames,calTree=calTree)
convertL1ToScanam(frames,assignRaDec=False)
  Fixed in HIPE 9.2.
 
ScanSpeedMask = frames.getMask('ScanSpeedMask')[0,0,:] 
indexSpeed    = ScanSpeedMask.where(ScanSpeedMask==False)
map = photProject(frames.select(indexSpeed), calTree=calTree,calibration=True)
This bug will be corrected in upcoming HIPE versions.

Known issues in HIPE 8.1

  cygwinpopup.png
  • Solved known issue with step by step execution. The known issue found in HIPE 7.0 with step by step execution has been solved. You can revert to the old behaviour by choosing Edit --> Preferences, going to Editors & Viewers --> Jython Editor and unticking the Use improved step by step execution checkbox.
 
  • HIPE drops Versant database connection under Windows XP: A problem can occur if HIPE is run under Windows XP connected via WiFi to a Versant database server. If the connection is interrupted briefly, the HIPE session may fail with a "Network layer read error". Linux does not have this problem because it keeps retrying for several minutes.

  • Garbled text after horizontal scrolling in Ubuntu 11.10. A problem on systems with NVidia cards and running Ubuntu 11.10 can cause text to appear garbled when scrolling horizontally, as shown by the figure below. To solve the problem, add the following property to the user.props file, located in the .hcss directory within your home directory. If there is no user.props file, create it.
java.vm.options = -Dsun.java2d.opengl=true
  garbledtext.png
  • Cannot download data from the Herschel Archive due to corrupted cache.

Error messages about product cache corruption may vary. One example is the following:

herschel.share.util.ConfigurationException: Failed to locate product in cache
  To solve the problem, try one or more of the following:
  • Choose Edit --> Preferences and go to Data Access --> Storages & Pools. Select a pool in the Pools pane and click the Clear cache button. Note that this button will not be available if there are no pools listed in the Pools pane.
 
  • If the above does not work, clear the HSA cache manually by deleting the home/.hcss/pal_cache/hsa directory, where home is your home directory.
  • If the above does not work, clear all caches manually by deleting the home/.hcss/pal_cache directory.
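From a terminal, the manual steps amount to the following (a cautious sketch; double-check the paths before removing anything):

```shell
# Clear the HSA product cache only.
rm -rf "$HOME/.hcss/pal_cache/hsa"

# If that is not enough, clear all caches:
# rm -rf "$HOME/.hcss/pal_cache"
```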
 
for i in range(3):
  print i
print "Finished"
The line by line execution will run the for loop and the print "Finished" statement in the same step. A workaround for this behaviour is to add a pass command after the end of the compound statement. The pass command will be executed in the same step as the for loop, but without any effect:
for i in range(3):
  print i
pass
print "Finished"
 

Known issues in HIPE 6.1

 The mosaic task fails if your default language uses a different delimiter for decimal numbers than English (for example, a comma rather than a dot).

A workaround is to add the following line to your hipe.props file:

java.vm.options = -Duser.language=en -Duser.region=en
  The file should be in the .hcss subdirectory of your home directory. If the file does not exist, create it. If the file already contains a line defining the java.vm.options property, you can add the two values -Duser.language=en and -Duser.region=en at the end of the line, separated by a space from the other values.
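The underlying failure is locale-dependent decimal parsing; a minimal illustration in plain Python (a sketch, not the mosaic code itself):

```python
def parse_decimal(text, decimal_sep):
    """Sketch: normalise the locale's decimal separator before parsing."""
    return float(text.replace(decimal_sep, '.'))

# A comma-decimal locale writes 1.5 as "1,5"; a parser expecting a dot
# rejects or misreads it, which is what breaks the mosaic task.
print(parse_decimal('1,5', ','))  # -> 1.5
print(parse_decimal('1.5', '.'))  # -> 1.5
```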
 When using the Product Browser perspective, a query with no results creates an empty product called p in the Variables view.

Steps to reproduce:

  1. In the Product Browser view, select a query source and search for an obsid that you know not to exist.
  2. In the Variables view, variables QUERY_RESULT and p appear.
  3. The QUERY_RESULT variable is correctly empty, as you can verify by issuing the command print QUERY_RESULT in the Console view.
  4. The p variable should not have been returned. You can safely delete it.
  See the Data Analysis Guide for more information on the Product Browser perspective.
 
  • Slow startup due to error messages.

You may see many errors like the following in the command line window from which you are starting HIPE. These errors cause a considerable delay in HIPE startup:

31-Jan-11 10:00:17.797 WARNING NavigatorView: Error checking file /bin: java.util.concurrent.TimeoutException
 Workaround: set the property hcss.hipe.refreshPeriod as follows:
hcss.hipe.refreshPeriod = 3600000
 See the HIPE Owner's Guide for information on how to set properties.

Known issues in HIPE 5.3

 

Core

  • You may see many errors like the following in the command line window from which you are starting HIPE. These errors cause a considerable delay in HIPE startup:
31-Jan-11 10:00:17.797 WARNING NavigatorView: Error checking file /bin: java.util.concurrent.TimeoutException
 Workaround: set the property hcss.hipe.refreshPeriod as follows:
hcss.hipe.refreshPeriod = 3600000
 See the HIPE Owner's Guide for information on how to set properties.
  • An error like the following can sometimes appear in the command line window from which you are starting HIPE. This error is harmless and can be ignored. May also affect older HIPE versions. Fixed in HIPE 6.0.
 
Exception in thread "AWT-EventQueue-0" java.util.ConcurrentModificationException
        at java.util.AbstractList$Itr.checkForComodification(AbstractList.java:372)
        at java.util.AbstractList$Itr.next(AbstractList.java:343)
        at herschel.ia.gui.kernel.parts.impl.AbstractSite.opened(AbstractSite.java:468)
...
 
  • Caches of PAL pools (most often used for accessing the HSA) can be corrupted when switching back and forth between HIPE v5.1 and previous versions. The result of this corruption is that the cache can silently give you other data than the data you were asking for. If you are switching back and forth between 5.1 and previous versions, clear your caches in the Data Access Preferences panel: From the Edit menu, select Preferences. Under Data Access, locate the Pools and Storages section. In the bottom right section of the panel, select the pool that has a cache enabled and click "Clear cache". Fixed in HIPE 6.0 and 5.3.
 

PACS

  • The following scripts in the Pipeline --> PACS menu in HIPE are known to crash: extended_pipeline.py, L1_smallSource.py, L2_scanMap.py. Other scripts may crash or not be applicable because they depend on the results of other non-working scripts. Many scripts are not adequately commented.
 
  • The data/pcal/PCalSpectrometer_ArrayInstrument_FM_v5.fits calibration file, dealing with spectrometer spatial calibration, contains incorrect values. This results in offsets of the order of one arcsecond between reported coordinates and commanded positions.
  • Some tasks may be holding data products in memory unnecessarily, thus slowing down HIPE and possibly causing out of memory errors.
  • In the photProjectPointSource task, the calibration = false mode (no longer recommended) is not flux conserving.
Line: 577 to 660
 
  • The AbstractMapperTask may treat a Wcs provided as input incorrectly. When the input Wcs is defined using the CDELTi keywords, the result is correct, but when the pixels are defined with the CDi_j keywords, these are translated into CDELTi and CROTA2 keywords. This is not equivalent when non-rectangular pixels are required.
  • The AbstractMapperTask has a bug in the generation of tod files which can result in an ArrayIndexOutOfBoundsException at the map generation.
  • The spireCal task outputs an empty calibration context if the input pool does not exist, instead of reporting the problem. This results in errors when user scripts attempt to access the calibration files.
META FILEATTACHMENT attr="h" autoattached="1" comment="" date="1357211057" name="cygwinpopup.png" path="cygwinpopup.png" size="28796" user="Main.DavideRizzo" version="1"
META FILEATTACHMENT attr="h" autoattached="1" comment="" date="1357211083" name="garbledtext.png" path="garbledtext.png" size="18498" user="Main.DavideRizzo" version="1"
 