Everything on the web is geared towards instant gratification, and search is no exception. It’s all about right answers in an instant, and to that end technology has focused on the aspects it can influence: performance (let’s take that as a given) and predicting what we’re after at any given time. Take, for example, the suggestions on Amazon’s home page, or Facebook’s search-as-you-type, which suggests profiles and groups that not only match what you’re typing but are shaped by your location, friends and interests.
When it comes to lending context to data, mapping and location-awareness are often the missing ingredient. Whether it’s showing prevalence and aggregates as heatmaps, or individual pin-point locations, we have the tools for you.
Heatmaps are typically used to convey prevalence: consider situations in which there are numerous points in your data and you’re interested in seeing the larger geographic trends, such as average unemployment, voters above a certain age, or criminal activity. Heatmaps make the areas of highest density (geographical hotspots) very clear.
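Under the hood, a heatmap is just an aggregation of point data into cells, where the count per cell drives the colour intensity. As a rough illustration (this is not Twigkit’s or any mapping provider’s actual implementation), binning coordinates into a grid might look like this:

```python
import math
from collections import Counter

def heatmap_bins(points, cell_size=0.1):
    """Aggregate (lat, lon) points into grid cells for heatmap rendering.

    Each point is assigned to an integer cell index; the count per cell
    is what the map shades. Purely illustrative -- real heatmap layers
    do this (and smoothing) inside the mapping library.
    """
    counts = Counter()
    for lat, lon in points:
        cell = (math.floor(lat / cell_size), math.floor(lon / cell_size))
        counts[cell] += 1
    return counts
```

Two nearby incidents land in the same cell and brighten it; a distant one starts a new, fainter cell.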
When accuracy is key, drop pins. Because each pin on your map corresponds directly to a point within your data, you can easily present specifics: individual job centers, voting stations, or crime scene locations. And although pins are less adept at conveying trends, the presence of a map underneath does maintain the spatial relationship between individual points.
In both cases above Twigkit automatically adapts to the data that you provide. Simply drop in a few lines of markup, choose a provider like Google, Bing or ArcGIS, and the platform takes care of everything else.
<geo:map response="from_solr" provider="google" latitude-field="geo_lat" longitude-field="geo_lon" events="automatic"></geo:map>
Nor is this a one-way street: you can use our sophisticated mapping components to pick or filter the data, or to link from pins directly through to detail pages or secondary searches.
Note below how moving the map will filter the results you get back (automatically or on click):
Give us a call if you’d like us to help you bring order to your data.
We’re very proud of our technology. It’s flexible and modular, and allows organisations to create search applications that span disparate, nebulous data sources in order to extract structured, collated findings from amidst the chaos.
It’s this flexibility, plus the fact that our applications are beautiful and easy-to-use, that makes us a great fit for pretty much all industries and sectors. And it means that we're often prototyping and working on new solutions for people trying to solve some pretty tough, important problems. One of which is crime.
We've just completed a short, ten-day sprint to prototype a brand-new criminal intelligence and analytics application for a US-based law enforcement agency. Together we wanted to see how search could assist with criminal profiling, identification and trend spotting.
The prototype is based on Twigkit's SIA (Secure Intelligence and Analytics) stack, and uses Open Source data supplied by the Dallas Police Department. Here's how we got on.
Our prototype brings together data from many different locations for the first time, but essentially consists of information about people (victims, witnesses, police officers, reporters, suspects and perpetrators), places (incident locations, addresses of individuals) and incidents.
We wanted to go a little deeper than a traditional, text- and facet-led search application. Such applications are valuable, but can lack the contextualising layer of insight that is so critical to effective law enforcement. We wanted our tool to seek out and present that context to officers, both in the field and in the office (so our prototype needed to work perfectly on mobile as well as desktop).
The prototype that we built over the course of the sprint is exactly that: a prototype. It isn’t fully featured or deployment-ready (we didn’t quite get around to delivering the big red ‘Whodunnit’ button), but we hope that by showing how far we got you’ll see how quickly Twigkit applications can come together, and how our technology can help make a positive difference to something as important as crime prevention.
Please note: We take data protection seriously. All information shown in screen grabs has been anonymised. The data used for the purpose of the prototype is open source, and taken from www.dallasopendata.com.
How can we help you today, Officer?
Visitors to the SIA application must first log in. Then, they’re greeted by the Dashboard.
Dashboards are a great way of providing a visual, 10,000ft overview of what’s going on across the breadth of an application. We tried to put as much value into the dashboard as possible by making every chart, element and statistic fully interactive.
An interactive dashboard means operators can do more than just observe. They can drill down and combine topics to see contextual, precise statistics. And as the whole page dynamically updates as topics are selected, it’s like seeing a complete, fresh report coming together in front of your eyes (it’s actually fascinating to delve into crime data in this way - do let us know if you'd like a demo).
By including geographic areas and ZIP codes in the dashboard, other trends can be monitored and considered. For example, an operator might spot that a certain neighbourhood in Southeast Dallas experiences a weekly upswing in assaults on Sundays. This insight can be monitored and used in planning future allocations and deployments.
In the screenshot above we can see that the highest number of assaults in Southeast Dallas take place within ZIP code 75217. Let’s now look at this neighbourhood in more detail.
Exploring a Neighbourhood
The ZIP code detail page brings a number of different elements together. An embedded map shows all recent incidents in the area. In the main body of the page these same incidents are listed in more detail, complete with who, where and what details.
Also on this page we're able to generate statistical information to reveal more general insight about the local area, including overall crime priorities, trends, times of high and low incident activity, and a list of ‘Local Faces’, the individuals who have historically interacted most often with the DPD in this neighbourhood (further sub-categorised by their role within incidents: as victims, witnesses or arrestees).
Police Officers associated with the highest number of incidents in the area are also shown - who better to approach for local advice than the officers on the beat?
All of the information on this detail page is generated entirely by search, meaning that it requires no maintenance and is always up to date.
Searching and Analysing Incidents
The dashboard offers a streamlined way of exploring topics and jumping into a results set, but a more traditional search interface is essential in order to give people the complete set of tools and filtering options that they need to locate and select specific incidents.
Incident results are presented as cards, with key information (ID, incident type, location and associated individuals) visible immediately. We could have added a lot more elements to each card, but there’s a balance to be struck: too little information can cause uncertainty and force users to interrupt their search by clicking through to detail pages; too much makes it harder to scan and compare items in the list at a glance.
In addition to lists of categorised filters, the SIA application offers further tools to help refine results. Dates are presented as histograms, enabling ranges to be selected. A zoomable heatmap allows geography to be considered, and pie charts inform about proportional incident volumes by crime type or division. All of these tools and interface elements are provided by Twigkit as out-of-the-box components.
The fields on each result card offer further filtering options. Individuals and ZIP codes can be added as filters to the overall query, alongside direct links to their respective detail pages.
Incidents in Detail
As you might expect, Incident detail pages bring together all of the details, people and places that relate to a specific incident. But by using search it’s also possible to identify and highlight to the operator other, potentially related incidents. We think this is quite exciting.
In the prototype we're using the search platform to try and identify potentially related incidents. This can be done using a wide range of criteria, such as key terms from the officer notes, incident type, proximity, time and the people involved. No, it's not quite a ‘Whodunnit’ button, but it highlights the importance of obtaining relevant peripheral information through search.
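To give a flavour of the idea, a naive relatedness score could blend those same signals. The fields, weights and thresholds below are entirely hypothetical, not Twigkit’s actual model, and a real system would lean on the search engine’s relevance machinery rather than hand-rolled scoring:

```python
import math
from datetime import date

def relatedness(a, b, max_km=5.0, max_days=30):
    """Rough 0-to-1 score for how related two incident records look.

    Blends shared note terms, shared people, geographic proximity and
    time proximity. Illustrative sketch only.
    """
    terms = len(a["terms"] & b["terms"]) / max(len(a["terms"] | b["terms"]), 1)
    people = 1.0 if a["people"] & b["people"] else 0.0
    # ~111 km per degree; crude but adequate at city scale
    dist_km = 111 * math.hypot(a["lat"] - b["lat"], a["lon"] - b["lon"])
    proximity = max(0.0, 1 - dist_km / max_km)
    days = abs((a["date"] - b["date"]).days)
    recency = max(0.0, 1 - days / max_days)
    return 0.4 * terms + 0.3 * people + 0.2 * proximity + 0.1 * recency
```

Ranking every candidate incident by this kind of score is what surfaces the ‘potentially related’ list on the detail page.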
Alongside this list of potentially linked incidents are overall statistics for the area (to give additional context to the operator), and a notes function so that insight and information can be discussed and shared with others.
Let’s now jump from the incident detail page to that of an individual.
Personal Records and Interactions
An individual might be linked to different incidents in different capacities. The person detail view aims to present all the facts available about someone in one place, in a clear and logical format.
The data brought together on this page includes basic information about the person (name, address, age, family and profession) alongside more detailed information about incident involvement (as a statistical overview and a breakdown: as victim, witness, reporter, suspect, perpetrator or a combination of the above), a map displaying all known addresses, and a breakdown of the types of offence that this person has witnessed, reported or committed.
On the roadmap for this view is a full chronological history of their interactions with the Police presented as a timeline, and integration with social media feeds to provide supporting contextual information.
Looking for People
Much like the incident search page, the Person search page offers operatives a range of tools to find, refine and pinpoint an individual or set of individuals.
By invisibly blending different data sets on this page, we enable operators to mix geographical and personal data queries together (“Show me all people with chest tattoos in Northwest Dallas”, for example).
To prove the point, we are adding additional data sources and visualisation features here, including the ability to automatically compare known individuals with unknown suspects based on distinguishing features, and more tactile, refined tools for choosing things like identifying marks or other physical attributes to make it quicker and easier for arresting officers to use the platform.
Over the course of only ten days we managed to put together a compelling case for search-enabled offender profiling and crime analytics. That's ten days end to end, from loading the data to deploying a ready-to-use application.
Our prototype combines high level overviews with dedicated detail views that pull together and present relevant, peripheral behaviours and trends including notable locations, commonly seen local characters, and the ability to spot and highlight potentially related incidents.
Perhaps most importantly, it allows our operators to interact with, adjust and journey across the data in an open way that supports their needs.
We believe that by making data simpler and easier to interact with, it can uncover new insight. Search is capable of some incredible things, and when used in the right way it can make a real difference to people's lives.
If you have questions or comments about this application, or have a project of your own in mind, we would love to talk to you about it. Please don’t hesitate to drop us a line today, or to give us a call on +44 (0)1223 653 163 (UK) or (408) 678-0400 (North America).
Search has become second nature to most of us. We use it every day to find things to buy, music to listen to, friends in faraway places or the optimum way to boil an egg. But nowadays there’s more to search than keywords and ten blue links.
Take this example from Amazon.com. Thousands of items matched my search for 'Roald Dahl', and it is obviously not feasible for me to page through all of them to find the book I’m after. But the search engine is clever: it can give me insight into the types of products available. By showing suggested filters on the left it enters into a dialogue with me, effectively asking whether I’m after hard copies or digital editions, in English or in French, and so on.
When the list of options is limited, as with format, it’s pretty straightforward: a quick glance will let the user determine a way forward. But consider the author list and things get a bit trickier. Then consider far more complex enterprise applications such as data lakes, patent research or business intelligence, and the problem is compounded.
In this case you really need a second step if the user’s intent was to find all books by a certain author. They could obviously have added this missing nugget to their query, but they may not have thought of that, or may simply have expected a different result. Here the solution might be to allow the user to search within the suggested filters to pinpoint exactly the ones they would like to apply: a pretty standard, but as yet underserved, requirement.
Here’s how we do it with one of our standard components for larger facets:
As you can see in the animation above, we can search within the filters to find all hospitals within the ‘Organisations' category, quickly narrowing down to exactly the ones we are interested in. And by the way, it's not a naïve search of the handful of filters that were returned, but a true, accurate (deep) lookup across all possible values. Neat, isn't it?
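The underlying idea is simple to state: match the typed query against the complete set of facet values held by the engine, not just the page of top values already on screen. A minimal sketch (names and the substring-match ranking are illustrative, not how Twigkit’s component works internally):

```python
def search_facet_values(all_values, query, limit=10):
    """Filter the complete list of facet values by a typed query.

    `all_values` maps each facet value to its document count across the
    whole index -- that completeness is what makes this a "deep" facet
    search. Matching is a simple case-insensitive substring test;
    results are ranked by count, then alphabetically.
    """
    q = query.casefold()
    hits = [(v, n) for v, n in all_values.items() if q in v.casefold()]
    hits.sort(key=lambda pair: (-pair[1], pair[0]))
    return hits[:limit]
```

Typing “hospital” against an ‘Organisations’ facet then returns every hospital the index knows about, most frequent first.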
Hit us up if this is a problem in your big data or search applications!
November 15-17, 2016. JW Marriott, Washington DC.
November 1-2, 2016. Cheshire, UK.
October 11-14, 2016. Boston, MA.
A really successful search and discovery application does more than just let people search for information. It helps its audience to answer the questions "What am I looking at? What does this all mean?"
Good applications communicate, and getting that right is a design problem. It's why we take the user experience of our applications so seriously.
After all, data can be really intimidating. It can be incomplete, and feel messy, complicated and confusing. Through our applications, our job is to interpret information, pull it apart and then reconfigure it for our clients and their customers in a way that meets their needs, whatever their situation: on a huge control-room screen, at a desktop workstation, or on a smaller-screened tablet or mobile. We must consider where and how information is presented on each page, and we must control how that data behaves in different circumstances.
With this in mind we thought we'd look at one particular element of our application toolkit: data tables, and specifically how they appear on smaller screens. What follows are some of the out-of-the-box options we offer, so that as a Twigkit-powered application comes together, helping different people access and understand what they're looking at becomes a human rather than a technical problem to solve.
Data tables are very often amongst the widest elements on a page, and can only shrink so much before things start to become over-crowded, or fall apart entirely.
The data comparison table above presents data cleanly enough on large screens, but its structural integrity goes out of the window as soon as the screen width narrows. Categories are eventually lost entirely, forcing our audience to scroll back and forth.
So what can we do?
What follows are a few different Twigkit approaches to this problem. The 'right' option is all about context. It will depend on an understanding of the audience and the job that they're trying to do, and the role and characteristics of the content itself.
With a 'stuck' left-hand column, users can scroll or swipe horizontally to browse comparison categories without having to dart backwards and forwards to check which company they're looking at.
By slightly reconfiguring our table (essentially flipping the x and y axes), the same 'stuck' left-hand column gives the table a different focus: we can now browse by specific characteristic rather than by company.
A different approach is to break the table into individual cards on small screens.
This is a neat solution which allows us to view the data more like traditional search results, and one that keeps subjects together with their data very neatly. The downside? It's much harder to cross reference and compare different comparison categories.
Did we say it was harder to cross reference and compare different comparison categories? It doesn't have to be!
And if circumstances demand it, we can even take a pinch of one and a drop of another, and create something that combines both.
We hope that this has given you a window into our thinking as well as our technology. And we should be clear: we think about big screens as well! More on this in an upcoming post.
As more and more organisations embrace cloud-based solutions for their data and information management needs, many are coming to the realisation that they still need to store a substantial amount of data closer to home. On premise may feel a little old-fashioned next to cloud offerings, but for numerous practical, legal and security reasons it remains relevant, and is unlikely to vanish any time soon.
This means that many organisations still find themselves working across a large number of disconnected, distributed systems, with no single point of access to it all. Organisations with data held in different search platforms, or in different geographic locations, face the same problem: distributed infrastructures make it difficult to paint a complete picture of what’s happening inside a business, with limited scope for moving from a top-level perspective into individual areas to investigate or observe how everything fits together.
And this ‘distributed’ problem can stem from a combination of completely valid realities: different departments select and adopt different technologies based on their own needs; privacy and export laws can impose strict geographical controls on data; mergers and acquisitions bring together disparate groups of people, each with their own organisational and data storage arrangements. There are plenty of reasons to adopt these new offerings; the challenge is making them cost-effective and practical for you.
So what is the best, most pragmatic way to deal with these issues? How should organisations start to think about connecting their data together? How can they ensure that their people are able to access and understand relevant, timely information with a minimum of disruption and maximum engagement? And how can they do all of this whilst complying with the internal and legal security policies that protect the integrity of individuals, data, and the business itself?
Our answer is federated search. For end users this means the ability to see more: more relevant information, more trends and more context, with pinpoint detail when they need it. Federated search delivers a whole that is greater than the sum of its parts.
Imagine combining internal marketing information held in SharePoint with individual customer transactional data from a database; or combining the results from two or more distinct SharePoint servers in different countries.
In terms of implementing Twigkit federated search, data sources can be added, removed or swapped out of the application incredibly quickly and easily: it’s as simple as adding a single line of code.
Of course, being able to bring business data together in this way is not the only benchmark for a truly successful single-pane-of-glass application. The security constraints of the application must remain enforced and in place from the outset. Moreover, the data within the application must itself be presented and structured in a way that allows and encourages people to find, explore and manipulate it, thanks to an interface that is clear, meaningful and device agnostic.
With Twigkit acting as the single point of access for all data sources, our rule engines and integrations with Single Sign-On (SSO) providers let organisations stay on top of their security requirements. The user experience isn’t disrupted with multiple logins and passwords; irrespective of which of the dozens of security models is in use, Twigkit provides a single point of secure access to data.
In the latest release of Twigkit we’ve simplified two of our key security integrations. For Kerberos we’ve removed the need for servlet container plugins and for fronting the Java application with IIS, which greatly speeds up configuration and setup. For OAuth 2.0 we’ve pre-packaged Google, Facebook and Office 365 implementations, so our customers can take full advantage of the many pre-packaged SSO services.
The interface of your application plays a huge role in delivering a great overall user experience. Twigkit makes it possible to create beautiful, tactile and clear application user interfaces (UI) quickly and easily, from a library of provided UI components. Components exist for all aspects of the application, from search results to reports and interactive visualisations, and each comes fully browser tested and guaranteed to work on any device or screen.
Bringing it all together
By using pre-built modules and components and neatly drawing a line between the application itself and all underlying data sources, the result is an application that offers true flexibility for the future, safeguarding the initial investment.
Technical Case Study:
SharePoint Federated Secure Search
With the advent of Office 365 (O365) many enterprise customers are taking the opportunity to move their SharePoint repositories into the cloud. But as we’ve seen, with even moderately complex security requirements this often means that some data remains in on premise SharePoint servers, while the rest is migrated to an online SharePoint environment using O365.
Twigkit can help here, by laying a secure, federated search over both on premise and cloud SharePoint repositories. How? Through our out-of-the-box integration with Microsoft Active Directory Federation Services (ADFS), the Security Token Service (STS) and the OAuth authentication protocol, Twigkit can perform a single search against both SharePoint and O365 at the same time.
Security Token Service
Twigkit applications authenticate using the Security Token Service to securely retrieve data from on premise SharePoint, ensuring individuals only see data they have permission to view. The same approach can be used for Office 365.
SharePoint Online natively supports OAuth, and authenticating with OAuth is simpler than using the STS. Arguably OAuth is the right approach when using O365 for single sign-on; however, it lacks the STS methodology's ability to federate security seamlessly with SharePoint on premise.
In both the STS and OAuth methods, Twigkit manages the tokens sent to the data platform; these tokens are how the repository determines what data the user is allowed to retrieve.
More than simply getting a set of results from both platforms, performing federation with Twigkit combines all aspects of the data, including facets and the rescoring of documents to optimise relevance, to deliver a compelling user experience across silos. What we get is a single, feature-rich view of the data, unhampered by distributed architectures at the platform level.
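To give a flavour of what federation involves, consider just one of those aspects: two engines return hits on incompatible score scales, and facet counts that need summing. This simplified sketch (not Twigkit’s actual implementation) min-max normalises scores per backend before interleaving, so that no single engine dominates:

```python
from collections import Counter

def federate(result_sets):
    """Merge hits and facet counts from several search backends.

    Each result set is {"hits": [(doc_id, score), ...],
    "facets": {value: count}}. Scores are min-max normalised per
    backend before the combined list is re-sorted. Illustrative only;
    real federation also handles paging, deduplication and security.
    """
    merged_hits, facets = [], Counter()
    for rs in result_sets:
        scores = [s for _, s in rs["hits"]] or [0.0]
        lo, hi = min(scores), max(scores)
        span = (hi - lo) or 1.0
        merged_hits += [(d, (s - lo) / span) for d, s in rs["hits"]]
        facets.update(rs["facets"])
    merged_hits.sort(key=lambda h: -h[1])
    return merged_hits, dict(facets)
```

The caller sees one ranked list and one merged facet tree, regardless of how many engines answered.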
Being a true, federated single-sign-on solution, the user is not challenged for credentials (assuming they have an active session). Queries are secured using our built-in security capabilities, again without custom code or convoluted configuration.
Twigkit provides real solutions to the many challenges of working with distributed data. Our acceptance of any and all data sources, our deep integration with multiple security providers, and our powerful UI component library all come together to create tactile, flexible solutions which connect people with the right information, on any device, irrespective of whether the underlying data is held on enterprise sites or in the cloud.
We believe that by making data easier to interact with, it can uncover new insights and make a real difference to people’s lives.
Twigkit is a software company with offices in San Jose, London and Cambridge. Over the past seven years our technology has changed the way forward-thinking global organizations access and make sense of their data.
Fortune 500 companies trust us with their search and discovery needs, alongside governments, military, manufacturers, media, retailers, charities, financial services, and more.
By solving complex problems with simple building blocks, marrying great defaults with fine grained control, and abstracting retrieval from any data provider, we enable custom search and discovery applications in a fraction of the time of bespoke development, and with demonstrably better results.
If you have a project of your own in mind please don’t hesitate to get in touch with us at email@example.com
Google have announced the discontinuation of the Google Search Appliance. So what happens next?
With the dust settling on Google's announcement that it is to discontinue and wind down sales of its on-premise search appliance offering, it would appear that Enterprise Search is, once again, at a crossroads. And as Google now officially doubles down on its effort to provide a cloud-based solution, albeit with both features and release dates unannounced, we look at what their announcement means for search, and for GSA customers in particular.
It should be said that this isn’t the first time that we’ve seen a large software vendor deprecate a product. A few years back Microsoft pulled support for FAST ESP in favour of search built into SharePoint. Having happened before, it will likely happen again.
But if you’re a GSA customer, the news may have been disquieting. How can you ensure continuity of your search solution beyond your term with Google Search for Work? Which platform should you consider switching to, and what challenges are you likely to face when you do? Valid questions, and as you might expect, there isn’t a one-size-fits-all answer. Your next best step is going to depend largely on the needs and specifics of your environment, the characteristics and landscape of your data, and the skills you have available.
What we can say is that if you’re a Twigkit GSA customer (directly, or through one of our partners), then there is good news: switching will not negatively impact your search application. Architected and built from the ground up with this kind of eventuality in mind, our technology abstracts and separates your search application from any and all underlying data providers. So if you need to swap your search engine for another, you can do so quickly, simply, and without affecting your overall investment in the application.
Search is not a commodity
One of our customers switched from FAST ESP to an open source provider, Elastic, after careful consideration. Their data was structured, editorially managed and (although paywalled) generally accessible. This leads us to a question a lot of our customers are asking themselves: are open source search engines viable in the enterprise?
Here’s what we think: as it stands today open source search has closed the gap on many of the commercial vendors, but there isn’t yet a truly viable open source contender when it comes to an end-to-end, out-of-the-box solution in the murky world of unstructured enterprise content.
Reading obscure binary formats, building connectors for ever-changing, ageing repositories, navigating convoluted access controls: these are all battles that have raged for decades, and smaller parts of a war that, to date, the open source community has been hesitant to join (we say this out of love: all of us at Twigkit are fierce proponents, advocates and committers in the community).
The bottom line is that unless your content is already structured and accessible, open source is going to present you with some challenges along the way.
Structured vs Unstructured Data
Nothing determines the quality and accessibility of data like structure. So before you start to price up your commercial options, take a good look at your data. The more structure that surrounds your assets, the easier it will be to make it searchable using open source alternatives like Solr and Elastic. The same principle applies to usability: more structure helps your search engine to deliver the results your end users are trying to find.
Structure is good
If your data is highly structured (either because it’s sitting inside a database or because it’s in a structured format like JSON, CSV or XML), it tends to be very easy to index in something like Solr or Elastic. You will find both to be fast, capable and highly scalable, and Solr even has tools to help ingest this sort of data easily. If this is your world and you haven’t used these platforms before, be prepared to be amazed at what they offer: lightning-fast, accurate facets with rich support for complex aggregation and statistical analysis on the fly.
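As a small illustration of how little ceremony structured ingestion needs, here is a sketch that builds a request body for Elasticsearch’s bulk indexing endpoint (newline-delimited JSON, one action line per document line). The index and field names are made up for the example:

```python
import json

def bulk_body(index, docs):
    """Build an Elasticsearch _bulk request body (NDJSON).

    Each document becomes two lines: an action/metadata line naming the
    target index and document id, then the document source itself. The
    result is POSTed to /_bulk with Content-Type application/x-ndjson.
    """
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"
```

With records already in JSON or CSV, loading millions of them is essentially a loop around a function like this.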
At Twigkit we use Solr and Elastic for business-intelligence-type applications on billions of records; for just a fraction of the cost, they offer capabilities that one might expect to find only in specialised commercial solutions.
A Messy Room
If the data landscape of your organisation is heavily siloed and involves content in many different formats spread across many different locations (file shares, Documentum, SharePoint, Lotus Notes and others), you will almost certainly need proprietary connectors to accurately extract your content and correctly restrict access (more on that later).
In our experience these established, monolithic software solutions rarely remain static for long. Between versions many things affecting the structure can change, making proper extraction of your content something of a moving target.
And of course the more solutions, and versions of each solution, you have, the more challenges you’ll face. Our advice would be to take a good look at the vendors (whether connector specialists or full-stack search engines) that might most closely fit your needs.
Ultimately the structure and accessibility of your data should strongly steer your final decision. If you need informed, impartial advice on the subject, we and our partners are very happy to help.
“Secure search” can mean a number of different things, but an important consideration for anyone evaluating a search platform is whether they will require support for security at the document level.
Document-level security controls govern access to every document in your search application on an individual, per-document basis. Access to each document is granted or rejected based on permissions which are generally set at either user or group level. Permissions are derived from privileges assigned in the file system or a content management system, usually in conjunction with something like Active Directory. This matters because these privileges need to be captured and stored alongside the documents themselves at the time they’re indexed (allowing access controls to be enforced at query time).
Commonly known as security trimming or early-binding security, this is really the only scalable, reliable and usable way of making sure your end users don’t end up seeing data that they shouldn’t.
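The query-time half of early binding is conceptually simple: each indexed document carries the users and groups allowed to see it, and results are checked against the caller’s identity. A toy sketch follows; the field names are illustrative, and in practice the ACL check is compiled into the query itself (not applied as a post-filter) so that counts and facets stay consistent:

```python
def trim(results, user, groups):
    """Filter results by ACLs stored with each document at index time.

    Each document carries `allow_users` / `allow_groups` sets captured
    from the source repository when it was indexed; the runtime check
    is then a cheap set membership test. Illustrative only.
    """
    visible = []
    for doc in results:
        if user in doc.get("allow_users", set()) or groups & doc.get("allow_groups", set()):
            visible.append(doc)
    return visible
```

Because the ACLs travel with the documents, no call back to the source system is needed per query, which is what makes the approach scale.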
If document-level security is something that your organisation needs, we strongly suggest that you commit to a commercial vendor that offers this capability out of the box.
Cloud or On-Premise
Finally, to cloud or not to cloud. Google have announced their plans, although details are scant at the moment. Other major players already offer hosted versions of their own proprietary search engines, or managed versions of Elasticsearch and Solr (or Solr-based products).
We have experience with these services, and feel that they really shine in their ability to automatically scale for content volume and query load. For publishers with a large amount of public content and/or simple security models, this is an attractive option. For more sensitive industries the jury’s out. You may feel more comfortable knowing that your information is housed somewhere you can see it.
Spoilt for choice
There are strengths and weaknesses to all the big and upcoming players in the space. HP IDOL is the veteran heavyweight in enterprise search. With Solr at its core, Lucidworks Fusion offers a library of connectors and document-level security trimming. Niche player Attivio provides highly capable technology that delivers all of the above whilst closing the gap between the database and search worlds, which used to remain firmly apart. Similarly, NoSQL vendors like MarkLogic have started to move towards the market themselves, with built-in search and discovery capabilities of their own.
These platforms, like the many others available, all have their particular costs, strengths and weaknesses. Familiarise yourself with the capabilities and support that each one offers, and map that against your budget, enterprise preferences, and the nuances of the problems your organisation is trying to solve.
Whether you’re building a search solution or surfacing analytical information from your data lake, your choice of stack is important. Your application represents a significant investment (especially if you want to get it right), and whether built from scratch or tightly coupled with a vendor, there will be some challenges along the way.
One thing is certain: in a world where changes in vendor policy and the continued rise of better, more capable technologies have the potential to cause disruption, your investment needs to be secure. It’s always been best practice to separate your application from your underlying data source as much as possible - and that remains true today.
This sort of portability means flexibility, giving you power to leverage best of breed (read: most suitable for you) at any time, and safeguard the investment in your business applications against factors outside of your control.
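One way to picture this separation (a hypothetical sketch, with all class and method names invented for illustration): put a thin adapter interface between the application layer and whichever engine sits underneath, so that swapping vendors touches the adapter, not the application.

```python
# Sketch of decoupling the application from its search backend via a thin
# adapter interface. All names here are invented for illustration; the
# vendor-specific adapters are stubbed rather than making real API calls.
from abc import ABC, abstractmethod

class SearchBackend(ABC):
    """The only surface the application layer is allowed to talk to."""
    @abstractmethod
    def search(self, query: str) -> list:
        ...

class SolrBackend(SearchBackend):
    def search(self, query):
        # In reality this would call Solr's HTTP API; stubbed here.
        return [f"solr-hit-for:{query}"]

class ElasticBackend(SearchBackend):
    def search(self, query):
        # Likewise a stand-in for an Elasticsearch client call.
        return [f"es-hit-for:{query}"]

def render_results(backend: SearchBackend, query: str) -> list:
    # The application never imports a vendor client directly, so changing
    # engines is a configuration change rather than a rewrite.
    return backend.search(query)

print(render_results(SolrBackend(), "unemployment"))  # prints ['solr-hit-for:unemployment']
```

The application code above would run unchanged against `ElasticBackend()`, which is exactly the portability the paragraph describes.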
One thing to remember: it’s tempting to think of the application layer and the user experience as the icing on the cake, but don’t be fooled: it is the cake. Your application is the way that people access, seek and interact with your data, so make sure that it’s well thought out, and planned and budgeted for appropriately.
If you have questions or comments, or have a project of your own in mind we would love to talk to you about it. Please don’t hesitate to drop us a line, or to give us a call on +44 (0)1223 653 163 (UK), or (408) 678 – 0400 (North America).
Big Data pioneer DotModus is one of Twigkit’s most valued and longest standing partners. Based in South Africa, their team have years of experience in creating bespoke Big Data applications that deliver rich insight for global businesses and governmental customers.
DotModus specialises in the minutiae of data. They know exactly which stones to turn over in order to uncover the richest, most valuable insights. They quickly gauge how to aggregate, connect and interpret that information; and they appreciate the importance — and the challenge — of presenting the resulting insight to end users in the right way.
Of all this, it’s the user interface that DotModus and its customers have historically found the most challenging aspect to get right. It’s not that they lack vision (quite the opposite), but as with so many large-scale technology projects, design time and resources tend to be eaten up early on, leaving the final, crucial finessing at the mercy of project deadlines and budgets.
And so it was Twigkit’s box of tricks, a fully featured library of UI components and enterprise-strength capabilities, that drew DotModus to us.
DotModus realised that our dedication to the human aspects of Big Data (the interface elements that shape the overall user experience) is the final piece of the Big Data puzzle. By supporting DotModus’ considerable capabilities on the data side, Twigkit enables them to realise their ambitions of intuitive, mobile-ready Big Data applications, created within a workable timeframe.
What successful Big Data applications look like
Single box Big Data solutions do exist, but they're not without drawbacks. They can be unintuitive and unwieldy to use and, more importantly, lack dedicated focus. Here’s the point: the architecture and form of a good Big Data solution must vary between different verticals, clients and projects. A bespoke solution doesn’t have to cost more if it’s built in a smarter way.
Building smarter is key. Prototype-driven development, coupled with a toolset that gets applications up and running quickly with real data, is very important. Twigkit’s tools do exactly that: out of the box we’re able to create many of the tent-pole features of Big Data, things like interactive dashboards, interactive visualisations, geospatial heat maps, data enrichment through user feedback, dynamically generated topic pages, reporting, and of course powerful search and discovery capabilities.
We take care of the user experience so that our partners at DotModus can be free to focus on what they do best — architecting and delivering cutting edge solutions that deliver the right sort of insight where it’s most needed.
The Power to Predict
What kind of insight? DotModus recently developed three Big Data applications (spanning Media, Government and Telecommunications) that — on the surface — seem unrelated. These are bespoke systems after all: optimised around different organisational requirements, structures and audiences. But what links them all is search: they harness the power of search to analyse past and real-time human behaviour in order to make predictions about the future.
For a Media company, this might involve tracking individual consumers across multiple media channels to predict probable future purchases, which in turn enables them to push relevant product suggestions and promotions their way.
For a Government department, the power of search can help to keep an ear to the ground at all times: by creating an early warning system that monitors news and social media chatter, and generates alerts should suspicious activity arise.
And for a Telecommunications provider, bringing together call logs and transactional behaviour can reveal spending trends on a per-customer basis, allowing them to predict when and where valued customers are next likely to need a credit top up, or require support.
Each of these Big Data solutions was created by DotModus in a more efficient, agile way than traditional software development. By selecting, configuring and expanding upon Twigkit stock components the journey from project launch to final implementation took days rather than months. Data surfaced by these applications is enhanced thanks to additional capabilities such as user feedback capture, on-the-fly enrichment, and the ability to store complex searches that automatically monitor data sources for new insights.
In essence: faster off the blocks, beautiful to interact with, and genuinely fit for purpose.
Insight from a billion documents, understood in the blink of an eye
These applications have more going for them than purely good looks. They perform some heavy lifting on the data front, simultaneously handling as many as a billion structured and unstructured documents coming from numerous independent sources.
Processing live data is no easy task, and being able to draw real-time insight and trends from media feeds, news sources and social media requires some very clever thinking indeed. Armed with an ingenious array of technologies, DotModus have managed to develop solutions that prioritise performance without compromising accuracy or features.
The Importance of User Experience
But scale and features aside, what’s really impressive is the way that all of these solutions expose and present the trends, relationships and aggregated sentiment that lie below the surface of the data in an elegant, engaging way.
We think that data (big or otherwise) must be presented in a way that is understandable, navigable and welcoming to users, and useful to their business.
User experience is too important a consideration to be left to the end of a project, where it risks becoming an unfinished, last-minute afterthought and a source of scope creep. With so much hard work done on the data side, it’s important to remember that without a positive, coherent user experience, users simply will not adopt a solution.
Partnering with Twigkit
We really value our relationship with our growing network of partners like DotModus, and as you've seen, we’re regularly blown away by what they achieve with our technology.
If you have questions or comments about any Twigkit powered application please don’t hesitate to drop us a line today, or to give us a call on +44 (0)1223 653 163 (UK), or (408) 678 – 0400 (North America). For more information about becoming a Twigkit partner, please visit our partner page or drop us a line.
Hey, I'm Kieran,
At Twigkit I'll be working as a Front End Developer, which means I get to create new and exciting bits and pieces for our user interfaces, as well as improving all the great tools Twigkit already has for creating beautiful applications.
After finishing my degree in Computer Science, specialising in Networks, I was given the chance to work as a developer for North Hertfordshire College, developing web applications for staff and students. I then started working for the construction group Willmott Dixon, developing AngularJS and C# web applications.
Outside of work I'm an American TV series addict, a keen runner, and I love to travel, having visited Stockholm, Seoul, Beijing and Barcelona, to name a few.
Front End Developer
We're delighted to welcome Krijn to the Milpitas team.
Technical Sales Manager
Hey there, I'm Krijn.
I'm joining Twigkit Milpitas as a Technical Sales Manager, doing a mix of pre-sales work, account management, and partner enablement. Basically, showing the world the amazing things you can do with Twigkit.
After getting my Master's in Computer Science from the Vrije Universiteit Amsterdam in nineteen ninety-mumble, I've been around the IT block a few times. I had some adventures in telecom and web content management before focusing on enterprise search technologies. In that field, I've worked at FAST Search & Transfer, Attivio, and Elastic, among others, and also did a spot of freelancing.
When I'm not working I love the theatre, reading science fiction, exploring the many culinary delights of the Bay Area, and tinkering with stuff (for example, Xcode, Pebble, and Lego).
Oh, and I'm originally from The Netherlands, but have lived in the US for 6 years now.
Twigkit is all about the team, and our team continues to grow! Introducing Tim Sanders, Principal Consultant.
Hi, I’m Tim,
At Twigkit I’ll be working as an enabler, engaging our clients and partners with the information and data they need to develop Twigkit solutions. Additionally, I will be working with technical documentation, training and best practices.
I've worked in the IT sector for over 15 years specializing in technical training, communication and enablement. I am originally from Wisconsin, US, and received my Bachelor's degree from Saint Cloud University in Business Computer Information Systems. Following graduation, I started my international life, living in Okinawa, Japan and then on to the UK where I gained a Master's degree at the University of York in Information Technology. I have been in the UK 8 years and am happy to call it my home.
In my free time, I enjoy cooking (actually, eating), skiing, photography and trying to get everything in my house to wirelessly connect to one another. Additionally, I like to travel and to practise my Japanese.
Being a Wisconsinite, if you like cheese, we’ll get along just fine.
Work at Amgen
One of our fantastic customers is expanding its search team. Amgen is looking for a talented developer to work with Twigkit and a variety of search technologies.
Twigkit and Amgen have worked together on some really exciting search projects over the past couple of years and although we can’t divulge the juicy details here, Amgen have big plans to really push the boundaries of search across their organisation. You'd be joining a tight, friendly team doing really exciting things with search.
This would be a full-time position working for Amgen, who are offering competitive pay and benefits along with full relocation and H1B visa sponsorship.
We're very happy to announce that Akshata Vaidya has joined Twigkit Milpitas as Senior Software Engineer.
Senior Software Engineer
Hello, I'm Akshata.
At Twigkit I will be working out of our Milpitas office on both the core development of our platform, and helping organizations to develop beautiful, scalable applications using our framework. In addition, I will work closely with our partners in enabling continued support and development of our applications.
Previously, I worked at State Farm Insurance developing applications used by their Actuarial Department. I am originally from Bloomington, IL, and completed my Bachelor’s in Actuarial Science (with a minor in Information Systems) and Master’s in Information Systems at Illinois State University.
I moved to the Bay Area recently and I’m absolutely loving the warm weather, amazing restaurants and the gorgeous hikes! Apart from spending time with family and friends I enjoy exploring new places, catching up on a good book, playing table tennis and hiking.
It's with great pleasure that we welcome Scott Brown to the Twigkit team.
Senior Software Engineer
Hi, I’m Scott.
At Twigkit I’ll be working on core development of the software stack. That means expanding our existing products and working on new features in accordance with current strategy and demand.
Before Twigkit I worked at MathWorks as a software quality engineer, and before that at the University of Cambridge as a research associate. For more information, take a look at the European Space Agency mission Gaia!
I’m originally from Brighton but studied in Durham (Physics) and UCL (Spacecraft engineering), before undertaking a PhD at the Institute of Astronomy in Cambridge.
In my spare time I love to travel and in general stay active, whether it’s exploring some new place, hiking over some mountain, or surfing on some beach. Recent trips have included Copenhagen and Lisbon where you should try the amazing pastéis de nata.
I enjoy cycling and tennis, as well as popping into local coffee houses first thing on a weekend. If you’re planning on going to Hot Numbers or the Orchard in Grantchester then there’s a good chance I'll see you there!
Interested in joining our team?
We have vacancies in both our Cambridge and Milpitas offices.
Twigkit welcomes Neil Garner as VP Sales, Global.
Twigkit is delighted to welcome Adrien Flammarion as our new Director of Advanced Services.
Press Release: strategic partnership between Wabion and Twigkit in the Enterprise search sector.