As we engage new NexJ CDM customers, we consider a foundational aspect of successful delivery: a sound implementation methodology. We all have plenty of toolkits to draw on, so starting from scratch is rare, but new tools with innovative approaches demand extra learning to ensure adoption. To kick-start the process, we begin with the life-cycle wheel depicted below. The touch points will vary across deliveries, by organization and by person, and each area can be expanded into deeper delivery or training cycles.
- Implementation likely starts with defining result attributes. This may sound a little intimidating at first, but it is as simple as assigning column headers to a spreadsheet: understand what needs to be measured and how it fits into the process being evaluated. Once the names are cataloged, technical staff map those attributes back to source systems already present in the organization. CDM takes care of the rest.
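To make the spreadsheet analogy concrete, here is a minimal sketch of what an attribute catalog might look like. The attribute names, source systems and field paths are hypothetical illustrations, not CDM's actual configuration or API:

```python
# Illustrative only: a catalog of result attributes, each mapped to a
# hypothetical source system and field, like spreadsheet column headers.
attribute_map = {
    "customer_name": {"source": "CRM", "field": "contact.full_name"},
    "account_balance": {"source": "Banking", "field": "acct.balance"},
    "last_contact_date": {"source": "CRM", "field": "activity.last_touch"},
}

# Attributes can be cataloged up front even before they are mapped.
attribute_map["risk_score"] = {"source": None, "field": None}

# Listing unmapped attributes shows what still needs technical help.
unmapped = [name for name, m in attribute_map.items() if m["source"] is None]
print(unmapped)  # ['risk_score']
```

The point of the sketch is that naming the columns and mapping them are separate activities; the business can complete the first while the second proceeds in parallel.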
- Share the results as soon as possible, even before all attributes are mapped or fully mapped. More discoveries occur when visualizing or interacting with data, and the sooner those discoveries occur the better. Results can be shared simply by passing a URL (the address line in your web browser) and analyzed with your reporting tool of choice. Your technology teams may also want to share this content with another application, much like a feed; this too is possible, and they might appreciate the built-in change control that ensures existing feeds continue to operate even as the result set changes.
- Enrich the result sets with the discoveries made to date. Modify or add attributes, and create computed columns: like formula cells in a spreadsheet, they calculate values for you. Get technical help for the complex cases when required, and leverage data from inside or outside the organization. CDM performs the calculation for you. Share the results and make more discoveries.
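The formula-cell analogy can be sketched in a few lines. The attribute names and the net-worth formula below are hypothetical examples, not a CDM feature reference:

```python
# Illustrative only: a computed column behaves like a spreadsheet formula,
# deriving a new value from attributes already in the result set.
rows = [
    {"assets": 250000, "liabilities": 40000},
    {"assets": 120000, "liabilities": 95000},
]

# Hypothetical computed column: net worth = assets - liabilities.
for row in rows:
    row["net_worth"] = row["assets"] - row["liabilities"]

print([r["net_worth"] for r in rows])  # [210000, 25000]
```

Because the derived value is defined once against the attributes rather than keyed into each row by hand, it stays correct as new rows arrive.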
- Perhaps the least surprising discovery is that not all source systems are perfect, and neither is the data they provide. A small error at source can be amplified into something quite problematic, and you may need to govern result data at source. This is neither scary, complicated nor overly bureaucratic; it simply means choosing the corrective action that suits your needs and abides by your policies, ranging from correcting the source system to living with the intermediate result and using a computed column. Whatever you select today can readily be changed tomorrow, so time to market and the cost of the error should guide your decision making.
- Deeper analytics often demand historical content, and many models depend on time serialization: your results need a date component describing when they occurred, with changes tracked accordingly. Traditionally this requires technical resources to design, construct and maintain, and can be a costly undertaking. CDM provides this capability natively: changes are tracked at the source system by user and timestamped, and result sets can also be journaled to a big data store. Either way, time-serialized, historically accurate data is aligned with your analytical model for seamless adaptive learning.
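The idea behind time-serialized tracking can be sketched generically: record every change with a timestamp instead of overwriting, then reconstruct the value at any point in time. The function names and data shapes here are hypothetical, chosen only to illustrate the technique:

```python
from datetime import datetime, timezone

# Illustrative only: each change is appended with a timestamp rather than
# overwriting the prior value, so history can be replayed for analytics.
history = []

def record_change(attribute, value, when):
    """Append a timestamped change for an attribute."""
    history.append({"attribute": attribute, "value": value, "at": when})

def value_as_of(attribute, when):
    """Return the most recent value of an attribute at a point in time."""
    candidates = [h for h in history
                  if h["attribute"] == attribute and h["at"] <= when]
    return max(candidates, key=lambda h: h["at"])["value"] if candidates else None

record_change("account_balance", 100, datetime(2024, 1, 1, tzinfo=timezone.utc))
record_change("account_balance", 250, datetime(2024, 6, 1, tzinfo=timezone.utc))

# The March query sees the January value; the July query sees the June one.
print(value_as_of("account_balance", datetime(2024, 3, 1, tzinfo=timezone.utc)))  # 100
print(value_as_of("account_balance", datetime(2024, 7, 1, tzinfo=timezone.utc)))  # 250
```

Building and maintaining this plumbing yourself is the costly part the paragraph above refers to; the sketch shows what the platform is doing on your behalf.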
- Collaborate by sharing result sets, findings, discoveries and analytics between team members. Each attribute is centrally defined and can be referenced by any number of result sets, so the work of mapping an attribute or defining a calculation only needs to be done once; the same holds for the governance and analytics steps. Gain a deeper understanding of what works well and apply it across teams and team members.
What changes would you make to your methodology? How much labor, resources or time could you save in your implementation? We welcome your thoughts, value your insights and action your feedback: share below!