Pro_Pack : Optimiser

The ProPack optimiser module manages the health of assets and your Maya scene via a Health object system. The HealthValidation class runs the tests bound to it, formatted such that each test returns a HealthTestObject. These objects contain the results of the test, what failed and what passed, and wherever possible they also have a fix method bound to them.

This is a crucial part of the ProPack systems for clients, dealing not just with overall optimisation methods and cleanups, but also with a complex health runner setup.

The key to all of the testing systems is the HealthTestObject, formatted in a specific manner and passed into the HealthValidation class to run and collate.

Each test makes and returns an instance of a HealthTestObject and is self-contained: tests can be run standalone without the HealthValidation class to check specific things, or grouped into a bigger set of tests, for example to health check a rig prior to releasing it to an animator.

>>> # import statement for the module via the r9pro decompiler
>>> from Red9.pro_pack import r9pro
>>> r9pro.r9import('r9popt')
>>> import r9popt
>>> import maya.cmds as cmds
>>> def simple_example_test(expected=False):
...     '''
...     This is an example of how to write correctly formatted Health tests
...     for the Validation systems based on simply matching a predictable input result.
...     This is how all the Maya environment tests are built in the ProPack
...
...     In simple tests like this we can just compare the results with the expected data
...     such that HealthObject.results_returned==HealthObject.results_expected
...
...     :param expected: status of the test expected
...     '''
...     # we make a fresh instance of a HealthTestObject, this is crucial
...     HealthObject = r9popt.HealthTestObject(name='simple_example_test')
...
...     # set the internal results_expected to that arg passed in
...     HealthObject.results_expected = expected
...
...     # bind a fix method if you have one written
...     # HealthObject.fix_method=maya_timeUnits_fix
...
...     # run the actual test and push its result to the HealthObject
...     HealthObject.results_returned = cmds.about(q=True, batch=True)
...
...     # do the simple compare by calling the set_byCompare func
...     # this sets the status internally and sets the test as having been run
...     HealthObject.set_byCompare()
...
...     # we ALWAYS have to return the HealthObject
...     return HealthObject
>>> def custom_example_test(expected):
...     '''
...     more complex test: most of the time we don't know exactly what
...     is in the scene before we run the test, so the simple compare
...     above is of no use. In this example we're constructing the test
...     and managing the results directly
...
...     :param expected: a list of nodes we expect to be present
...     '''
...     # as above, take an instance of the HealthObject and set the initial vars
...     HealthObject=r9popt.HealthTestObject(name='custom_example')
...     HealthObject.results_expected=expected
...
...     for node in expected:
...         # run our test
...         if not cmds.objExists(node):
...             # test failed so we add the failed nodes to the results_failed list
...             HealthObject.results_failed_nodes.append(node)
...             # we can also set the log info, displayed when we get the overall status
...             HealthObject.log_message+='failed node : "%s" : Missing in Scene' % node
...         else:
...             # test passed so add the passed nodes to the results
...             HealthObject.results_passed_nodes.append(node)
...     # if we have failed nodes then the test failed and we need to set its status as such
...     # by default the HealthObject is set as failed
...     if not HealthObject.results_failed_nodes:
...         HealthObject.set_passed()
...     # return the HealthObject
...     return HealthObject

When we put this together with the HealthValidation system we get a very powerful way of checking data inside Maya. In Pro for clients this is also bound to the fileOpen callback, allowing us to test the scene after load against a given project template.

>>> class Client_SceneOpen_HealthValidation(r9popt.HealthValidation):
...     def __init__(self, *args, **kwargs):
...         # Tests to run, note this is a list of tuples where the second
...         # arg is the expected result if specified
...         self.tests=[(r9popt.maya_sceneUnits_test, {'expected':'centimeter'}), # Maya environment units
...                     (r9popt.maya_timeUnits_test, {'expected':'ntsc'}),  # Maya environment fps 'ntsc' 30fps
...                     (r9popt.maya_upAxis_test, {'expected':'y'})]  # Maya environment world axis Y-up
>>>
>>> test=Client_SceneOpen_HealthValidation()
>>> test.run_health()
>>>
>>> test.getStatus()  # pass or fail?
>>> print test.prettyPrintStatus()  # note this is formatted as a string for output to file
>>> test.writeStatus(filepath)  # output the test results to file
>>>
>>> # fix anything that failed
>>> test.run_fix_methods()

You could of course just take an instance of the default HealthValidation object and fill up the internal tests list, but it’s often easier to have a bespoke class to test a specific set of data.

>>> test=r9popt.HealthValidation()
>>> test.tests=[(r9popt.maya_sceneUnits_test, {'expected':'centimeter'}),
...             (r9popt.maya_timeUnits_test, {'expected':'ntsc'}),
...             (r9popt.maya_upAxis_test, {'expected':'y'})]
>>> test.run_health()

Optimizer module for dealing with the health of a scene

Main Classes

OptimizerUI()
OptimizeScene() The OptimizeScene class is responsible for the optimization of the current Maya Scene.
HealthTestObject(name) HealthTestObject is a base class object that should be managed and returned by all of the Health Test functions.
HealthValidation(*args, **kwargs) This is the main Health Test system for all of ProPack and Red9’s internal publisher system.
optimizer_ui()

stub for the pro openUI call

class OptimizerUI

Bases: object

classmethod show()
class OptimizeScene

Bases: object

The OptimizeScene class is responsible for the optimization of the current Maya Scene.

when initializing the class we expose a number of internal options that allow you to tailor how the optimize_full system deals with your scene

optimize_base(pre_post=True, *args)

Base Maya Optimizers with a few extra cleanups thrown in for good measure

optimize_full(runBase=True, *args)

Optimize by exporting all DagObjects to a temp file then importing the data and running a base optimize on the remaining data.

Parameters:runBase – do we run the base optimiser func first?

Note

any additional nodes you need to retain during the process, such as non-dag nodes like Audio nodes, need to be handled either by adding a selection method to the self.get_retained_funcs list, OR by adding to the initial self.nodes_to_retain list
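The retention mechanism described in the note can be sketched in plain Python; `find_audio_nodes`, the node names and the data here are illustrative mocks of the documented `nodes_to_retain` / `get_retained_funcs` pattern, not the ProPack implementation:

```python
# Illustrative sketch: nodes_to_retain seeds the export list directly,
# while get_retained_funcs holds selection callables whose returns are
# appended before the temp export in optimize_full.
def find_audio_nodes():
    # stand-in for a Maya selection method, e.g. cmds.ls(type='audio')
    return ['audio1', 'audio2']

nodes_to_retain = ['exportTag1']          # non-dag nodes added up front
get_retained_funcs = [find_audio_nodes]   # or gathered via bound funcs

retained = list(nodes_to_retain)
for func in get_retained_funcs:
    retained.extend(func())
```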
get_nodes_to_retain()

nodes to include in the rootNode list that gets exported during the optimize_full function call. This is so that we can add in additional non DAG nodes such as audio nodes into the temp export to ensure everything we want to retain is kept.

delete_format_output(func)

simple decorator to format the return prints, a fix method should return a list of nodes / data that it’s fixed and this decorator prints that status out after the fix method has been run.
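As a rough illustration of that pattern (the real decorator lives in the module; `example_fix` is a made-up fix method):

```python
import functools

def delete_format_output(func):
    '''print each node a fix method reports as fixed, then pass the result through'''
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        fixed = func(*args, **kwargs)
        for node in fixed or []:
            print('fixed : %s' % node)
        return fixed
    return wrapper

@delete_format_output
def example_fix():
    # a fix method should return the list of nodes / data it fixed
    return ['nodeA', 'nodeB']
```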

class HealthTestObject(name)

Bases: object

HealthTestObject is a base class object that should be managed and returned by all of the Health Test functions. It’s an encapsulated status object for a particular test method, holding results from the test and fix methods.

>>> test=r9popt.maya_timeUnits_test(expected='ntscf')
>>> test.print_status()
>>> test.run_fix_method()
Internal Vars of note:
  • self.fix_method=None # fix function for this test object
  • self.results_failed_nodes=[] # nodes deemed to have failed the test, where appropriate
  • self.results_passed_nodes=[] # nodes that passed the test
  • self.results_returned=None # results used to compare against if the results_expected var was given
  • self.results_expected=None # if specified this is the result expected from the test method.
  • self.log_message='' # during the tests we generally fill this with useful info for the print feedbacks
run_fix_method()

If this test object has already run and has a fix method bound, and the test failed then run the fix method.

has_run

has this HealthObject been run as part of a test

selectFailedNodes()
print_status()
status

wrapper for the status of the object, if the test passed, failed or threw a warning. This also compares any results_expected with the data from the test so we can compare against an expected set of data.

Note

Test.status=’PASSED’ if either the test.set_passed() has been run internally, or if self.results_expected == self.results_returned. This allows for simple matching of default data such as Maya environment calls etc. For more complex status checking I’d recommend doing that inside your test function and setting the status there using the set_passed etc.
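A minimal pure-Python mock of that status flow, runnable outside Maya (`MockHealthTestObject` is illustrative only, not the r9popt class):

```python
class MockHealthTestObject(object):
    '''mock of the documented status behaviour: failed by default,
    set_byCompare passes only when expected == returned'''
    def __init__(self, name):
        self.name = name
        self.results_expected = None
        self.results_returned = None
        self.has_run = False
        self._status = False  # by default the object is set as failed

    def set_passed(self):
        self._status = True
        self.has_run = True

    def set_failed(self):
        self._status = False
        self.has_run = True

    def set_byCompare(self):
        # simple compare used when the expected result is predictable
        if self.results_expected == self.results_returned:
            self.set_passed()
        else:
            self.set_failed()

    @property
    def status(self):
        return self._status

check = MockHealthTestObject('units_check')
check.results_expected = 'centimeter'
check.results_returned = 'centimeter'
check.set_byCompare()
```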

set_byCompare()

automatically set the status based on comparing the results_expected==results_returned This is generally for simple test methods where you can guarantee the results are consistent.

set_by_failed()

automatically set failed if there are failed results in the test object
set_failed()

set test as failed, note this also sets the has_run flag to True

set_passed()

set test as passed, note this also sets the has_run flag to True

set_warning()

set test as a warning, note this also sets the has_run flag to True

set_corrupt()

set test as corrupt, note this also sets the has_run flag to True

failed
passed
class HealthValidation(*args, **kwargs)

Bases: object

This is the main Health Test system for all of ProPack and Red9’s internal publisher system. The idea is that you bind individual tests to the class to run; each test returns a HealthTestObject, which means we can easily collate data and fix issues via the run_fix_methods call

In Pro for clients this is also bound to the fileOpen callback, allowing us to test the scene after load against a given project template.

See examples in the API docs above

bindTests(testdata=[], filepath=None)

bind a custom set of tests from an external config list, tests are specified as a list of tuples where the first value is the function, the second is a list of kws to pass into the function, usually expectedResult etc. If there are no kws to pass just set that to None

  • ie : [(func, **kws), (func, **kws), (func, None)]
Parameters:
  • testdata – [] list of tuples as above
  • filepath – path to a json file with the tests setup and correctly formatted
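A sketch of the testdata format and how a runner might dispatch it; the dummy test funcs below are placeholders for the real r9popt tests:

```python
# each entry is (func, kws) where kws is a dict of keyword args, or None
def dummy_units_test(expected='centimeter'):
    return expected

def dummy_axis_test():
    return 'y'

testdata = [(dummy_units_test, {'expected': 'centimeter'}),
            (dummy_axis_test, None)]

# dispatch as a validation runner would: unpack kws only when given
results = []
for func, kws in testdata:
    if kws:
        results.append(func(**kws))
    else:
        results.append(func())
```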
loadTests(filepath)

load in tests from a file for ease

Parameters:filepath – filepath to take the tests from. Not currently implemented
run_health(verbose=True, detailed=False)

run the given set of test objects

Parameters:
  • verbose – run the printouts or run it silently
  • detailed – turn on all the detailed results feedback in the printouts
run_fix_methods()

run all fix methods bound to those HealthTestObjects that failed

run_customFuncs()

run any custom functions bound to this test setup

getFailedTests(asDict=False)

for each healthObject validate its status and return those HealthTestObjects that failed and/or threw a warning or error

Parameters:asDict – either return a straight list of failed HealthTestObjects or a dict where the key is the test name
getStatus(asDict=False, verbose=False)

for each healthObject validate its status and return the overall result

Parameters:
  • asDict – If False (default) return the overall status of all the tests as a bool, if True return a dict where the keys are the status and value is a list of tests matching that status
  • verbose – if true print all the results out, else just return the status itself
prettyPrintStatus(overview=True, detailed=False, passed=True, indepth=True)

Simple formatter to make sense of the results in a manner that can be either printed to screen or out to file. It’s worth testing the flags here as there is a LOT of formatting possible to tweak the results into a format that contains just the base data, or all the complex fails etc

Parameters:
  • overview – just print / return the overview of each test status - no results data
  • detailed – turn on the detailed results in the feedback data
  • passed – include passed tests in the detailed data
  • indepth – expand the detailed results to include all the health results data
writeStatus(filepath=None, indepth=False)

write a detailed log out to a given filepath

Parameters:
  • filepath – filepath for the log file
  • indepth – expand the detailed results to include all the health results data
class RigHealthValidation(*args, **kws)

Bases: Red9.pro_pack.core.optimiser.HealthValidation

TestClass to see if the extraction of class inheritance is running for the optimiser Health system

get_metaRigs()

generic mrig call for all base tests

get_rootJnt()

generic call for all tests dealing with skeleton roots

maya_timeUnits_test(expected='ntsc')

Maya Units: time is as expected

maya_timeUnits_fix(HealthObject)
maya_sceneUnits_test(expected='centimeter')

Maya Units: sceneUnits are as expected

maya_sceneUnits_fix(HealthObject)
maya_upAxis_test(expected='y')

Maya Units: upAxis is as expected

maya_upAxis_fix(HealthObject)

fix the upAxis to the expected result

maya_evalManager_test(expected='off')

Maya Eval Manager: off, parallel or serial (which should never be run in production only for debugging)

maya_evalManager_fix(HealthObject)

fix the evalManager graph to the expected result

maya_timerange_test(expected=(0, 1))

Maya timerange: check that the scene timerange matches the expected range

maya_timerange_fix(HealthObject)

fix the timerange to the expected result

nodes_exist_test(expected=[])

Check in the Maya scene for a pre-existing set of nodes

meta_nodes_get(*args)

simple get for all metaNodes in the scene

meta_nodes_invalid_test(expected=False)

test for metaNodes in the scene which are deemed invalid by their own internal mNode.isValid() call, usually when they have no connections to other nodes. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:expected – NA - unused
meta_nodes_invalid_fix(*args, **kwargs)

find all inValid metaNodes by deleting them via their internal mNode.delete() call This fix method is bound in such a way that it will run the test method internally if not already run.

mRigs_haveKeys_test(expected=False, mRigs=[])

Test to check if all mRigs in the scene have existing keys or not. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – if we’re expecting keys or not
  • mRigs – specific mRig metaNodes only if passed
mRigs_haveKeys_fix(*args, **kwargs)

Remove all keys found and reset / zeropose the rig. This fix method is bound in such a way that it will run the test method internally if not already run.

mRigs_zeroPose_test(expected=None, mRigs=[])

Test to check if all mRigs in the scene have existing zeroPoses. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
mRigs_zeroPose_fix(*args, **kwargs)

Add a ZeroPose to the Rig in its current state. This fix method is bound in such a way that it will run the test method internally if not already run.

mRigs_attrMap_test(expected=None, mRigs=[])

Test to check if all mRigs in the scene have existing attrMaps. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
mRigs_attrMap_fix(*args, **kwargs)

Add an attrMap to the Rig in its current state. This fix method is bound in such a way that it will run the test method internally if not already run.

mRigs_renderMeshes_hooked(expected=1, mRigs=[])

Test to check if all mRigs in the scene have existing renderMeshes hooked up, at least the expected number or above. This wire is used by the PoseSaver to create the PosePointCloud visual mesh reference. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – minimum number of expected meshes hooked up
  • mRigs – specific mRig metaNodes only if passed
mRigs_renderMeshes_head_hooked(expected=1, mRigs=[], group='main')

Test to check if all mRigs in the scene have existing renderMeshes hooked up, at least the expected number or above. This wire is used by the PoseSaver to create the PosePointCloud visual mesh reference. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – minimum number of expected meshes hooked up
  • mRigs – specific mRig metaNodes only if passed
mRigs_exportSkeletonRoot(expected=None, mRigs=[])

Test to check if all mRigs in the scene have an existing exportSkeletonRoot hooked up. This wire is used by the PoseSaver to generate the skeletonData block. Failed Results built : HealthObject.results_failed_nodes : [mRig,...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
mRigs_mirrorIndexClashes_test(expected=None, mRigs=[])

Test to check for any clashing mirrorIndexes on mRigs. Failed Results built : HealthObject.results_failed_nodes : [(mRig, node),...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
mRigs_mirrorIndexMissing_test(expected=None, mRigs=[])

Test to check that all controllers wired to the rigs have mirrorIndexes assigned. Failed Results built : HealthObject.results_failed_nodes : [(mRig, node),...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
mRigs_unwired_ctrls_test(expected=None, mRigs=[], searchPattern=[])

Test to check that all nurbsCurves, with optional name matching, under the ctrl_main of the mRig are wired to the systems. This is a WARNING only as it’s a very general test but useful for debugging everything. Failed Results built : HealthObject.results_failed_nodes : [(mRig, node),...]

Parameters:
  • expected – NA - unused
  • mRigs – specific mRig metaNodes only if passed
  • searchPattern – if given this is used in the search to find nurbsCurves with given naming conventions

This is a Warning failure only

mRigs_characterset_test(expected=True, mRigs=[])

check to see if all the mRig controllers’ exposed attributes are wired to a valid characterSet

mRigs_characterset_fix(*args, **kwargs)

fix character set membership for the failed mRig node attrs

mRigs_timecode_attrs_test(expected=True, mRigs=[])

check to see if the mRig has timecode attrs propagated to it

mRigs_timecode_attrs_fix(*args, **kwargs)

Add production TimeCode attrs to CTRL_Main if not found

mRig_pickertemplate_test(expected=True, mRigs=[], templatefile='')

generic MetaRig is also supported with this test. Check to see if the mRig has a valid Pro Picker template

This is a Warning failure only

Parameters:
  • expected – if we were expecting a picker or not
  • mRigs – given mRigs to test, if None then we find all instances of mRigs in the scene
  • templatefile – template file to use in the fix method if not already found
mRig_pickertemplate_fix(*args, **kwargs)

bind the given Picker template file to the mRig if not already found

skin_influenced_by_test(expected=None, skins=[])

Test to check for any meshes / skinClusters that have more influences than the expected maximum. This ONLY tests skinClusters attached to mesh objects

Parameters:
  • expected – maximum number of skin weights per vert for all skinClusters found or passed
  • skins – list of specific skins to test if you want to test only specific skins
skin_method_test(expected='dq', skins=[])

Test to check the skinMethod used by all skinClusters matches the expected either dq or linear. This ONLY tests skinClusters attached to mesh objects

Parameters:
  • expected – ‘linear’,’dq’ or ‘blended’ skinning methods, relates to 0=linear, 1=dq, 2=weightBlended
  • skins – list of specific skins to test if you want to test only specific skins
skin_method_fix(*args, **kwargs)

set the skinning method on the failed nodes appropriately

skinned_joints_at_zero_rotate(expected='True', skins=[])

Test to check that all joints that are part of a skinCluster have zero_rotates. This ONLY tests skinClusters attached to mesh objects

Parameters:
  • expected – True - all joints have zero rotates
  • skins – list of specific skins to test if you want to test only specific skins
skeleton_rotate_tolerance(rootJnt=None, tolerance=0.001, searchPattern=[], fk=False)

Test to check that all joints under a given rootJnt have a rotation value under the given tolerance. This is used when binding a rig-up to ensure that we get no additional gimbal introduced via a rig system etc.

Running the fix method will zero the rotate values

Parameters:
  • rootJnt – root joint for the test, if not passed we use the first instance of mRig.skeletonRoot
  • tolerance – maximum allowed rotate in any axis for the joints
  • searchPattern – regex arg passed so we can do simple testing for strings and ignore matches.
  • fk – IF we’re dealing with an mRig then this option switches the rig into FK mode before the test
skeleton_rotate_tolerance_fix(*args, **kwargs)

set all failed joints to zero

skeleton_has_keys(rootJnt=None, searchPattern=[], expected=False)

Test to check that all joints under a given rootJnt have no animation keys set on them. If no rootJnt is passed then we try and inspect the mRig or SrcExt / exportTag nodes

Running the fix method will delete all keys from the skeleton

Parameters:
  • rootJnt – root joint for the test, if not passed we use the first instance of mRig.skeletonRoot
  • searchPattern – regex arg passed so we can do simple testing for strings and ignore matches.
skeleton_has_keys_fix(*args, **kwargs)

delete all found animCurves on the failed joints

skeleton_unique_names_test(rootJnt=None)

check that all joints in a given skeleton have unique short names. results_failed={shortname:(longname1, longname2, ...)}

Parameters:rootJnt – root of the skeleton to test if not passed we use the mRig networks
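The results_failed layout described above can be illustrated with plain Python over Maya-style pipe-delimited long names (the joint paths here are invented sample data):

```python
from collections import defaultdict

# sample long names: two joints share the short name 'hand'
long_names = ['|root|spine|hand', '|root|arm|hand', '|root|spine']

# group long names by their short (last path element) name
by_short = defaultdict(list)
for ln in long_names:
    by_short[ln.split('|')[-1]].append(ln)

# only short names with more than one long name are clashes
results_failed = {short: tuple(longs)
                  for short, longs in by_short.items() if len(longs) > 1}
```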
blends_have_targetGeo_test(expected=True, blends=[], meshcheck=True)

test that any blendshapes in the scene have respective original target meshes wired up to the blend node

Parameters:
  • expected – whether the targets should or shouldn’t be in the scene
  • blends – specific blendShape nodes to test against, default=[]
  • meshcheck – only check blends associated to polymeshes, default=True
blends_targetGeo_name_test(expected=True, blends=[], meshcheck=True)

test that any blendshapes in the scene have respective original target meshes that are named as per the target attrs driving them; this is crucial for some game engine exporters

Parameters:
  • expected – hard set to True, we wouldn’t run the test unless we expected it to be so.
  • blends – specific blendShape nodes to test against, default=[]
blends_targetGeo_name_fix(HealthObject)

try and rename any failed target meshes back to the name of the target on the blendshape they’re connected to

blends_target_count_test(expected=100)

test that any blendshapes in the scene have less than a given number of blendTargets

mesh_compare_generate(mesh, filepath=None)

safe consistent method to generate the test mesh to use in the compare call

mesh_compare_against_OBJ_test(expected, mesh)

compare a given mesh against an expected OBJ file generated by the mesh_compare_generate func above and passed in as a path. This is useful when building complex facial systems so you can validate that your mesh returns to the actual master Mesh delivered by clients. Sometimes simply turning off the skinCluster envelope is not enough reassurance that the mesh matches.

Parameters:
  • expected – in this case a path to an obj file that we’re going to validate the given mesh against
  • mesh – mesh to compare against in scene

Note

This requires Diffmerge.exe, you can download from https://sourcegear.com/diffmerge/. Once downloaded drop it here Red9/packages/diffMerge.exe

texture_path_isValid_test(expected=True)

test if all file nodes have valid texture paths

TODO: return r9File objects so we can automatically do the p4sync if needed??

texture_path_isValid_fix(HealthObject)

if we have an asset control like P4 try and run a file sync

texture_path_under_rootfolder_test(expected='')

test to see if the texture paths for all file nodes are under the correct rootfolder path

Parameters:expected – rootpath to test against
texture_path_under_rootfolder_fix(HealthObject)

repath the textures if they aren’t in a subfolder from the basefile

shape_node_names_test(nodes, expected='True')

Test to check that shape names under transforms are named accordingly

Parameters:
  • expected – True - all shape nodes are named as per their transform parents
  • nodes – list of specific nodes to test if you want to test only specific shapes
shape_node_names_test_fix(*args, **kwargs)

fix shapeNode names based on transform parent names

chSet_arrays()

CharacterSet debugging for placeholder list orders

nodeTypes_present_test(expected=0, nodeType='audio')

test for the presence of a specific nodeType in the scene

Parameters:
  • expected – int=0, number of nodes of this nodeType in the scene that’s acceptable
  • nodeType – Maya nodeType to scan for, default=audio
nodeTypes_present_fix(*args, **kwargs)
constraints_interpType(expected=[2, 0])

test all orient and parent constraints and validate their interpType is set as expected. This by default is set to a list [2, 0], allowing noflip and shortest to pass and all other interpTypes to fail. The first var in the expected list is then used as the default that failed constraints will get set to

Parameters:expected – [2, 0] where 0=’noflip’, 1=’average’, 2=’shortest’, note that if this is given as a list then the first arg is the default that failed interpTypes will get set to.
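The expected-list behaviour (first entry doubles as the default that failed constraints are set to) can be mocked without Maya; the constraint names and values below are invented, and real queries would go through cmds.getAttr on the constraint's interpType attr:

```python
# allowed interpTypes: 2='shortest', 0='noflip'; first entry is the fix default
expected = [2, 0]
default_fix = expected[0]

# mock scene data: constraint name -> current interpType value
scene_constraints = {'parentConstraint1': 2,   # shortest - passes
                     'orientConstraint1': 1}   # average  - fails

failed = {con: val for con, val in scene_constraints.items()
          if val not in expected}
# the fix would set every failed constraint to the default
fixed = {con: default_fix for con in failed}
```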
constraints_interpType_fix(*args, **kwargs)
displayLayers_test(expected=[])

test to see if displayLayers are present and if they should be or not

Parameters:expected – [] list of layers expected to be present, any others would fail the test
displayLayers_fix(*args, **kwargs)

remove all displayLayers that failed the test

reference_nodes_invalid_test(*args)

return a list of reference nodes in the scene that have no file pointer and are therefore invalid

reference_nodes_invalid_fix(*args, **kwargs)

remove any references that come back as invalid

namespaces_found_test(expected=[])

test for the existence of namespaces in the scene

Parameters:expected – [] list of namespaces that are valid
namespaces_found_fix(*args, **kwargs)

remove any namespaces that come back as invalid

unknownPlugins_test(expected=True)

test to find unknown/dead plug-in nodes in the scene and remove them

unknownPlugins_fix(*args, **kwargs)

fix method : delete all old plugins

turtle_nodes_present_test(expected=False)

test for the presence of BLOODY Turtle Nodes

turtle_nodes_present_fix(HealthObject)
audio_nodes_get()
audio_nodes_delete(*args, **kwargs)
turtle_nodes_get()

Return Turtle nodes in the scene

turtle_nodes_delete(*args, **kwargs)

Remove Turtle nodes from the scene

hik_nodes_valid_get(*args)

return any HIK nodes in the scene