The characterization of an algorithm’s worst-case time complexity is useful because it succinctly captures how its runtime will grow as the input size becomes arbitrarily large. However, for certain algorithms — such as those performing search-based test data generation — a theoretical analysis to determine worst-case time complexity is difficult to generalize and thus not often reported in the literature. This paper introduces a framework that empirically determines an algorithm’s worst-case time complexity by doubling the size of the input and observing the change in runtime. Since the relational database is a centerpiece of modern software and the database’s schema is frequently untested, we apply the doubling technique to the domain of data generation for relational database schemas, a field where worst-case time complexities are often unknown. In addition to demonstrating the feasibility of suggesting the worst-case runtimes of the chosen algorithms and configurations, the results of our study reveal performance trade-offs in testing strategies for relational database schemas.
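The doubling technique described in the abstract can be illustrated with a short sketch. The idea is to run the algorithm on inputs of size n, 2n, 4n, ..., and examine the ratio of successive runtimes: a ratio near 2 suggests linear growth, near 4 quadratic, and so on. The function names below (`doubling_experiment`, `make_input`) are hypothetical and not taken from the paper's framework; this is only a minimal sketch of the general approach, assuming wall-clock timing is an acceptable proxy for cost.

```python
import time

def doubling_experiment(f, make_input, start_n=1000, rounds=5):
    """Run f on inputs of doubling size and record runtimes.

    f: the algorithm under study, called as f(data).
    make_input(n): builds an input of size n (hypothetical helper).
    Returns a list of (n, runtime_seconds, ratio_to_previous_runtime);
    the ratio is None for the first round.
    """
    results = []
    prev_elapsed = None
    n = start_n
    for _ in range(rounds):
        data = make_input(n)
        t0 = time.perf_counter()
        f(data)
        elapsed = time.perf_counter() - t0
        ratio = (elapsed / prev_elapsed) if prev_elapsed else None
        results.append((n, elapsed, ratio))
        prev_elapsed = elapsed
        n *= 2  # double the input size for the next round
    return results
```

For example, timing a sort with `doubling_experiment(sorted, lambda n: list(range(n, 0, -1)))` should yield ratios a little above 2, consistent with the O(n log n) worst case; in practice one averages several repetitions per size to smooth out timing noise.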
Kinneer, C., Kapfhammer, G. M., Wright, C. J., & McMinn, P. (2015). Automatically evaluating the efficiency of search-based test data generation for relational database schemas. In Proceedings of the 27th International Conference on Software Engineering and Knowledge Engineering.
Want to cite this paper? Look in the BibTeX file of gkapfham/research-bibliography for the key "Kinneer2015".