I’ve been running a lot lately, and so have been listening to lots of podcasts! Which is how I stumbled upon this great episode of the Lullabot podcast recently — embarrassingly one from over a year ago: “Talking Performance with Pantheon’s David Strauss and Josh Koenig”, with David and Josh from Pantheon and Nate Lampton from Lullabot.
(Also, I’ve been meaning to blog more, including simple responses to other blog posts!)

Interesting remarks about BigPipe
Around 49:00, they start talking about BigPipe. David made these observations around 50:22:
I have some mixed views on exactly whether that’s the perfect approach going forward, in the sense that it relies on PHP pumping cached data through its own system, which basically requires handling a whole bunch of strings to send them out, as well as that it seems to be optimized around this sort of HTTP 1.1 behavior. Which, to compare against HTTP 2, there’s not really any cost to additional connections in HTTP 2. So I think it still remains to be seen how much benefit it provides in the real world with the ongoing evolution of some of these technologies.
David is right; BigPipe is written for an HTTP 1.1 world, because BigPipe is intended to benefit as many end users as possible.
And around 52:00, Josh then made these observations:
It’s really great that BigPipe is in Drupal core because it’s the kind of thing that if you’re building your application from scratch that you might have to do a six month refactor to even make possible. And the cache layer that supports it, can support lots other interesting things that we’ll be able to develop in the future on top of Drupal 8. […] I would also say that I think the number of cases where BigPipe or ESI are actually called for is very very small. I always whenever we talk about these really hot awesome bleeding-edge cache technologies, I kinda want to go back to what Nate said: start with your Page Cache, figure out when and how to use that, and figure out how to do all the fundamentals of performance before even entertaining doing any of these cutting-edge technologies, because they’re much trickier to implement, much more complex and people sometimes go after those things first and get in over their head, and miss out on a lot of the really big wins that are easier to get and will honestly matter a lot more to end users. “Stop thinking about ESI, turn on your block cache.”
Josh is right too: BigPipe is not a silver bullet for all performance problems; definitely ensure your images and JS are optimized first. But equating BigPipe with ESI is a bit much; ESI is indeed extremely tricky to set up. And … Drupal 8 has always cached blocks by default. :)
Finally, around 53:30, David cites another reason why more sites are not handling authenticated traffic:
[…] things like commenting often move to tools like Disqus and whether you want to use Facebook or the Google+ ones or any one of those kind of options; none of those require dynamic interaction with Drupal.
Also true, but we’re now seeing the inverse movement, with increasing skepticism toward the social media giants, not to mention the privacy (GDPR) implications. Which means sites that have great performance for dynamic/personalized/uncacheable responses are becoming more important again.

BigPipe’s goal
David and Josh were being constructively critical; I would expect nothing less! :)
But in their description and subsequent questioning of BigPipe, I think they forget its two crucial strengths:
BigPipe works on any server and is therefore available to everybody, and it works for many things out of the box, including e.g. every uncacheable Drupal block!
Bringing this optimization that sits at the intersection of front-end & back-end performance to the masses rather than having it only be available for web giants like Facebook and LinkedIn is a big step forward in making the entire web fast.
Using BigPipe does not require writing a single line of custom code; the module effectively progressively enhances Drupal’s HTML rendering, and it has been turned on by default since Drupal 8.5!

Conclusion
Like Josh and David say: don’t forget about performance fundamentals! BigPipe is no silver bullet. If you serve 100% anonymous traffic, BigPipe won’t make a difference. But for sites with authenticated traffic, personalized and uncacheable blocks on your Drupal site are streamed automatically by BigPipe, no code changes necessary:
(That’s with 2 slow blocks that take 3 s to render. Only one is cacheable. Hence the page load takes ~6 s with cold caches, ~3 s with warm caches.)
This year, at several events (SANDCamp, DrupalCamp LA, DrupalCon Nashville, and DrupalCamp Colorado), I had a chance to talk about and show how at weKnow we approach the development of API-driven applications. For those of you who use Drupal, this is something like decoupled or headless Drupal, but without the Drupal part.
This article outlines weKnow’s approach and provides some insight into how we develop some web applications.
Yes, this may sound strange, but whenever we need to build an application that is not content-centric, we use Symfony instead of Drupal. What are those cases? Whenever we do not require the out-of-the-box functionality that Drupal offers, such as content management, content revision workflows, field widgets/formatters, views, and managing data structures from the UI (content types).
Pattern Lab (PL) is a well-known open-source project for generating a design system for your site. In the last two years it has gotten a lot of attention in the Drupal community. It's a great way to implement a design system in your front-end workflow.
The following post describes how our client (the City and County of San Francisco) began to implement a pattern library that will eventually be expanded upon and re-used for other agency websites across the SF.gov ecosystem.

USWDS
Using the U.S. Web Design System (USWDS) until their own pattern library was ready for prime time was a client requirement.
This is part 2 in a series that explores how to use paragraph bundles to store configuration for dynamic content. The example I built in part 1 was a "read next" section, which could then be added as a component within the flow of the page. The strategy makes sense for component-based sites and landing pages, but probably less so for blogs or content-heavy sites, since what we really want is for each article to include the read next section at the end of the page. For that, a view that displays as a block would perfectly suffice. In practice, however, it can be really useful to have a single custom block type, which I often call a "component block", with an entity reference revisions field that we can leverage to create reusable components.
Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll investigate Admin Denied, a module which prevents you from accessing the super user's account.
Drupal training is happening around the world, and we're getting ready for 2019 now. The common purpose of DrupalGTD is to introduce newcomers to Drupal and our community in a locally organized event, either in-person or online.
Drupal training event in 2018 with FFW in Albany, NY
Mark your calendars for the following dates and if you would like to host a training event, there's a place to do it now for maximum lead time.
We'll be celebrating GTD all month during February, April, June, September, and December, but we also have target dates.

2019 Drupal GTD dates
- February 7-9
- April 18-20
- June 27-29
- September 12-14
- December 5-7
In 2019, the dates are expanding to Thursday-Saturday, rather than only Friday-Saturday. By including Thursdays, we're encouraging hosts to experiment by offering events on different days to see what works better in your locale.

Have questions about getting started?
VisualN provides an interface to check "how it works" for any available drawer on the site. To see the list of drawers, go to the VisualN -> Available Drawers Preview menu item.

Available drawers list
Though VisualN allows you to use any resource type as a data source (e.g. csv or xls files, or views), for demo purposes it is enough to have some dummy data. Such data can be obtained from data generators. Data generators are simple plugins returning an array of data (which is just another resource type) of a given structure that can be used by certain drawers (e.g. Leaflet uses lat, lon and title data fields).
Data generators may also provide info about drawer or drawers that can use generated data. Those drawers and data generators are considered compatible. Drawers highlighted green have compatible data generators.
There are a few use cases where you may want to use the Available drawers preview UI:
- check a drawer in action and examine its configuration form settings
- set configuration values to create a visualization style
- use the preview UI to aid drawer development and to test changes
- check the data format used by drawers (e.g. using the table drawer)
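VisualN itself is a Drupal (PHP) module, but the data generator idea is language-agnostic: a generator is just a plugin that returns rows in a structure some drawer understands, and a generator and a drawer are "compatible" when the generated fields cover what the drawer needs. As a rough sketch in Python (all names here are hypothetical, not the module's actual API):

```python
import random

def leaflet_dummy_data(n=5, seed=42):
    """Generate dummy rows with the fields a Leaflet-style drawer expects."""
    rng = random.Random(seed)
    return [
        {
            "lat": round(rng.uniform(-90, 90), 4),
            "lon": round(rng.uniform(-180, 180), 4),
            "title": f"Point {i}",
        }
        for i in range(n)
    ]

def compatible(generator_fields, drawer_fields):
    """A generator and a drawer are compatible if the generator
    provides every field the drawer needs."""
    return set(drawer_fields) <= set(generator_fields)

rows = leaflet_dummy_data()
print(rows[0])
print(compatible({"lat", "lon", "title"}, {"lat", "lon"}))  # True
```

The preview UI then only has to ask each generator which fields it emits to decide which drawers to highlight as compatible.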
This week we talked with David Valdez. Read about what impact Drupal made on him, which contribution he is proudest of, and what Drutopia is.
- Adopt the PSR-12 standard for PHP7 return types once Drupal 8 drops PHP 5 support
- Provide standard around type hinting
- Explicitly disallow Yoda conditions (needs Coder love for some automation)
For the past two North American DrupalCons, my presentations have focused on introducing people to the Webform module for Drupal 8. First and foremost, it’s important that people understand the primary use case behind the Webform module, within Drupal's ecosystem of contributed modules, which is to…
The other important message I include in all my presentations is…
Over the past two years, between presentations, screencasts, blog posts, and providing support, the Webform module has become very robust and feature-complete. Still, only experienced and advanced Drupal developers have been able to fully tap into the flexibility and openness of the Webform module.
The flexibility and openness of Drupal
Drupal's 'openness' stems from the fact that the software is open source; every line of code is freely shared. The Drupal community's collaborative nature does more than just 'share code'. We share our ideas, failures, successes, and more. This collaboration leads to an incredible amount of flexibility. In the massive world of content management systems, 'flexibility' is what makes Drupal stand apart from its competitors.
Most blog posts and promotional material about Drupal's flexibility reasonably omit the fact that Drupal has a steep learning curve. Developers new to Drupal struggle to understand entities, plugins, hooks, event subscribers, derivatives, and more until they have an ‘Aha’ moment where they realize how ridiculously flexible Drupal is.
The Webform module also has a steep learning curve
The Webform module's user experience focuses on making it easy for people to start building fairly robust forms quickly, including the ability to edit the YAML source behind a form. This gives users a starting point to understanding Drupal's render and form APIs. As soon as someone decides to peek at...
BADCamp 2018 was the first really big event I attended, aside from actively participating in Drupal Camp Costa Rica for three years. Kindly, some co-workers who had already attended shared their experience with me, which gave me great expectations. In addition, I was excited to sightsee in San Francisco and Berkeley.
After dedicating this year to front-end work, the BADCamp sessions left me more than satisfied, with refreshed knowledge and practices. So I would like to share my experience and the content of the sessions I participated in:
The second day was a highlight: attendees were given challenges and tools, and the dialogue tables enriched my personal experience by letting me listen to others talk about ways to improve development applications.
Team AdWeb has worked for a distinctive list of industries, ranging from hospitality to technology and from retail to an online lottery purchase website. Yes, we recently collaborated with a Japan-based company to build their website with a lottery purchase system, using Drupal 8. We’ve been Drupal-ing since our inception and have been an active member of the global Drupal community. Our association with and experience of Drupal were the basis of the client’s immense faith in us, and we knew that we were going to stand true to that.
About the Project
The client’s requirement was for us to build a website for them in Drupal 8. The website is basically an online lottery purchase system. For confidentiality reasons, we cannot share the name of the company/client, but we would like to share that the experience of working on this project was new and enriching.
We personally love experimenting with and implementing innovative features to enhance a client’s website. Plus, we get a little more excited when it’s a Drupal 8 website. We integrated a host of futuristic features into this website too. But since it’s an online lottery purchase system, we knew that the integration of the payment gateway was going to be an integral part. Hence, we created three types of payment gateway, as follows:
The user is an integral part of this entire online lottery system, and hence several functionalities are crafted around them. For example, a user can purchase coins via the WebMoney payment method and can also buy a lottery ticket by choosing any product bundle. A user also has the option to select the quantity of the product or go for the complete set. The payment for either can be made with coins, a GMO credit card, or points.
A draw system is used for the selection of the lottery winner. Other than the lottery prize, the user also stands a chance to win the Kiriban product as a prize. The Kiriban product is based on the product bundle configuration; it is an additional product that a user gets, as defined by an admin user.
Any e-commerce website will have multiple users buying the same product. The back end should therefore update the remaining quantity of the product after each purchase. Issues occur when two or more users place an order at the same time; this is the problem of concurrent shopping. In this case, the lottery was open for a specific time, and the issue occurred in showing the updated quantity. The problem came to our notice when the site went live and around 7-8 users made a transaction at the same time. We immediately started working on the issue.
We quickly pinned down the problem and started searching for a resolution. We had created e-commerce websites several times before, so we tried multiple methods to resolve the issue, mentioned below, but none of them worked in this particular case.
Initially, we tried using a Drupal lock to resolve the issue, but in vain.
Later, we used a MySQL lock, but this too didn’t work, due to the involvement of multiple quantities inside a for loop.
Using a random sleep time also did not work on its own, because it produced nearby values rather than exact ones.
Though the random sleep time method did not work by itself, it gave birth to the final resolution. We made a minor modification to it and divided the sleep time into ranges of 3. Also, to avoid the possibility of any further clash, we adopted a table of 100 numbers.
The Final Resolution:
After trying out a handful of methods, we finally came up with one that worked in our favor. Let us share the steps that finally helped us address the problem of concurrent shopping:
- A table consisting of the numbers 1 to 100 was taken, with the sleep time divided into ranges of 3.
- A random number was picked, and a flag value for it was set.
- Then, a greater number from those numbers, within the next range of 3, was picked.
Below is the table that was created to bring out the final solution:
The ‘flag’ column is 0 by default and is automatically set to 1 whenever the number is in use.
How it works:
- At the beginning of the transaction, the max sleep_time where flag=1 is checked.
- The sleep_time for the first user will be 0.
- After this, a random number greater than the max sleep_time is selected from a range of 3.
- The first user’s range is 1-3.
- For the second user, one number is skipped after the max sleep_time, and the range starts after that number.
- If a user gets a max sleep_time of 3, then the range for the random number will be 5-7.
- If the second user gets 6 as the random number, then the range for the third user will be 8-10.
- The flag value is updated to 1 for the chosen random number.
- At the end of the transaction, the flag value is reset to 0.
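The real implementation lived in PHP and MySQL; purely as an illustration of how we read the steps above, here is the slot-allocation logic sketched in Python (the `flags` dictionary stands in for the MySQL table of 100 numbers, and the exact skip/range arithmetic is our interpretation of the walkthrough):

```python
import random

TABLE_SIZE = 100
RANGE = 3

# slot number -> flag (0 = free, 1 = in use), standing in for the MySQL table
flags = {n: 0 for n in range(1, TABLE_SIZE + 1)}

def acquire_sleep_time():
    """Pick a sleep-time slot: skip one number past the current max
    in-use slot, then choose randomly within the next range of 3."""
    in_use = [n for n, f in flags.items() if f == 1]
    if not in_use:
        return 0  # the first user sleeps 0 and takes no slot
    start = max(in_use) + 2  # skip one number after the max
    candidates = [n for n in range(start, start + RANGE) if n <= TABLE_SIZE]
    slot = random.choice(candidates)
    flags[slot] = 1  # mark the slot as in use
    return slot

def release_sleep_time(slot):
    """At the end of the transaction, reset the flag to 0."""
    if slot:
        flags[slot] = 0
```

For example, if the largest in-use slot is 3, the next user draws from 5-7, matching the walkthrough above; staggering the sleep times this way keeps concurrent transactions from updating the quantity at the same instant.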
The Final Say:
“All’s well that ends well.” And that’s exactly what we have to say about this particular project. Yes, though we had coded and created many e-commerce websites before, this was the first time we picked up a project to create a Drupal 8 website with an online lottery system. And believe us, it was a monumental success for us and a satisfying project for the client.
A machine learning model that could lead a driver directly to an empty parking spot fetched the second prize in the Graduate level: MS category at the 2018 Science and Technology Open House Competition. It goes without saying that the dream of computer systems with godlike powers and the wisdom to use them is not just a theological construct but a technological possibility. And sci-fi éminence grise Arthur C. Clarke rightly remarked that “any sufficiently advanced technology is indistinguishable from magic.”
Machine learning predates computers!
Artificial Intelligence (AI) may be the buzzword of our times, but Machine Learning (ML) is really the brass tacks. Machine learning has made great inroads into many areas. It is capable of looking at pictures of biopsies and picking out possible cancers. It can be taught to predict the outcome of legal cases, write press releases, and even compose music! However, the sci-fi future where machine learning beats a human in every conceivable department and is perpetually learning isn’t a reality yet. So, how does machine learning fit into the world of a content management system like Drupal? Before finding that out, let’s go back to the times when computers did not even exist.
In this day and age, self-driving cars, voice-activated assistants and social media feeds are some of the tools powered by machine learning. Compilations made by the BBC and Forbes show that machine learning has a long timeline that relies on mathematics from hundreds of years ago and the elephantine developments in computing over the years.
Mathematical innovations like Bayes’ Theorem (1812), the Least Squares method for data fitting (1805) and Markov Chains (1913) laid the foundations for modern machine learning concepts.
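As a tiny, self-contained illustration of the least squares idea (not tied to any of the historical systems above), fitting a straight line to data reduces to a closed-form calculation:

```python
def least_squares_line(xs, ys):
    """Fit y = a*x + b by minimizing the sum of squared errors (closed form)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

xs = list(range(10))
ys = [2 * x + 1 for x in xs]  # perfectly linear data: y = 2x + 1
a, b = least_squares_line(xs, ys)
print(a, b)  # 2.0 1.0
```

With noisy data the same formula returns the line that best explains the points, which is exactly the "data fitting" that underpins many modern learning algorithms.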
In the late 1940s, stored-program computers like Manchester Small-Scale Experimental Machine (1948) came into the picture. Through the 1950s and 1960s, several influential discoveries were made like the ‘Turing Test’, first computer learning program, first neural network for computers and the ‘nearest neighbour’ algorithm. In the nineties, IBM’s Deep Blue beat the world chess champion.
Post-millennium, several technology giants like Google, Amazon, Microsoft, IBM and Facebook are actively working on more advanced machine learning models. Proof of this is AlphaGo, developed by Google DeepMind, which beat a professional Go player; Go is considered more intricate than chess!

Discovering Machine Learning
Machine learning is a form of AI that allows a system to learn from data rather than through explicit programming. It is not a simple process. As the algorithms ingest training data, they produce increasingly accurate models based on that data.

“Advanced machine learning algorithms are composed of many technologies (such as deep learning, neural networks and natural-language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information.” - Gartner
When you train your machine learning algorithm with data, the output that is generated is the machine learning model. After training, when you provide an input to the model, you will be given an output. For instance, a predictive algorithm will build a predictive model. Then, when the predictive model is provided with data, you receive a prediction based on the data that trained the model.

Difference between AI and machine learning (Source: IBM)
Machine learning may have enjoyed massive success of late, but it is just one of the approaches to achieving artificial intelligence.
Forrester defines artificial intelligence as “the theory and capabilities that strive to mimic human intelligence through experience and learning”. AI systems generally demonstrate traits like planning, learning, reasoning, problem solving, knowledge representation, social intelligence and creativity, among others.
Alongside machine learning, there are numerous other approaches used to build AI systems such as evolutionary computation, expert systems etc.
Machine learning is generally divided into the following categories:
- Supervised learning: It typically begins with an established data set and a certain understanding of how that data is classified, and it intends to find patterns in the data that can be applied to an analytics process.
- Unsupervised learning: It is used when the problem needs a large amount of unlabeled data.
- Reinforcement learning: It is a behavioural learning model. The algorithm receives feedback from the data analysis thereby guiding the user to the best outcome.
- Deep learning: It incorporates neural networks in successive layers for learning the data in an iterative manner.
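To make the supervised case concrete, the 'nearest neighbour' algorithm mentioned in the timeline above is about the smallest possible example: label a new point with the label of its closest training example. A minimal sketch in Python (the data here is made up purely for illustration):

```python
def nearest_neighbour(train, point):
    """Classify a point by the label of its closest training example
    (squared Euclidean distance)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    best = min(train, key=lambda item: dist2(item[0], point))
    return best[1]

# Labelled training data: (features, label) pairs
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

print(nearest_neighbour(train, (1.1, 0.9)))  # small
print(nearest_neighbour(train, (9.0, 9.0)))  # large
```

The "learning" here is nothing more than storing labelled examples; more sophisticated supervised methods generalize from the examples instead of memorizing them.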
Today, the majority of enterprises rely on descriptive analytics, which is needed for efficient management but not sufficient to enhance business performance. For businesses to reach a higher level of responsiveness, they need to move beyond descriptive analytics and move up the intelligence capability pyramid. This is where machine learning plays a key role.
Machine learning is not a new technique, but interest in the field has grown multifold in recent years. For enterprises, machine learning has the ability to scale across a broad range of businesses like manufacturing, financial services, healthcare, retail, travel and many others. (Source: Tata Consultancy Services)
Business processes directly related to revenue-making are among the most valued applications: sales, contract management, customer service, finance, legal, quality, pricing and order fulfilment.
Exponential data growth, with unstructured data like social media posts, connected-device sensor data, competitor and partner pricing, and supply chain tracking data among others, is one of the reasons why adoption rates of machine learning have skyrocketed.
The Internet of Things (IoT) networks, connected devices and embedded systems are generating real-time data which is great for optimising supply chain networks and increasing demand forecast precision.
Another reason why machine learning is successful is its ability to generate massive data sets through synthetic means, like extrapolation and projection of existing historical data, to develop realistic simulated data.
Moreover, the economics of safe and secure digital storage and cloud computing are merging to put infrastructure costs into free fall thereby making machine learning more cost effective for all the enterprises.
A session at DrupalCon Baltimore 2017 had a presentation which was useful for machine learning enthusiasts and did not require any coding experience. It showed how to look at data through the eyes of a machine learning engineer.
It also leveraged deep learning and site content to give Drupal superpowers by making use of the same technology that is exploding at Facebook, Google and Amazon.
The demonstration focused on mining Drupal content as the fuel for deep learning. It showed when to use existing ML models or services and when to build your own, how to deploy ML models, and how to use them in production. It showed free pre-built models and paid services from Amazon, IBM, Microsoft, Google and others.
A drag-and-drop interface was used for creating, training and deploying a simple ML model to the cloud with the help of the Microsoft Azure ML API. The Google Speech API was used to turn spoken audio content into text content for use with chatbots and virtual assistants. The Watson REST API was leveraged to perform sentiment analysis. The Google Vision API module was used to add face, logo, and object detection to uploaded images. And Microsoft’s ML API was leveraged to automatically build summaries from node content.
Another session at DrupalCon Baltimore 2017 showed how to personalise web content experiences on the basis of subtle elements of a person’s digital persona.
Standard personalisation approaches recommend content on the basis of a person’s profile or the past activity. For instance, if a person is searching for a gym bag, something like this works - “Here are some more gym bags”. Or if he or she is reading about movie reviews, this would work - “Maybe you would like this review of the recently released movie”.
But the demonstration shown at this session had more advanced motives. They exhibited Deep Feeling, a proof-of-concept project that utilises machine learning techniques to make better recommendations to users. This proof of concept recommended travel experiences based on the kinds of things a person shares, with the help of the Acquia Lift service and Drupal 8.
Using the Instagram API to access a person’s stream of consciousness, the demo showed that their feeds were filtered via a computer-vision API, which was used to detect and learn subtle themes about the person’s preferences. Once a notion was established of what sort of experiences the person thinks are worth sharing, the person’s characteristics were matched against their own databases.
Another presentation held at Bay Area Drupal Camp 2018 explored how the CMS and Drupal Community can put machine learning into practice by leveraging a Drupal module, taxonomy system and Google’s Natural Language Processing API.
Natural language processing concepts like sentiment analysis, entity analysis, topic segmentation, language identification among others were discussed. Numerous natural language processing API alternatives were compared like Google’s natural language processing API, TextRazor, Amazon Comprehend and open source solutions like Datamuse.
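The services above are black boxes; to give a feel for what "sentiment analysis" means at its very simplest, here is a toy lexicon-based scorer in Python (real services like those compared above use learned models, not a hand-made word list):

```python
# Tiny hand-made lexicon; a real service learns these weights from data.
LEXICON = {"great": 1, "awesome": 1, "love": 1,
           "terrible": -1, "broken": -1, "hate": -1}

def sentiment(text):
    """Return a score in [-1, 1]: sum of word polarities / words matched."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    if not hits:
        return 0.0  # no opinion words found: treat as neutral
    return sum(hits) / len(hits)

print(sentiment("I love this great module"))  # 1.0
print(sentiment("the update is broken"))      # -1.0
print(sentiment("a neutral sentence"))        # 0.0
```

A score like this, attached to each article, is the raw material that the taxonomy-plus-sentiment recommendation system described below combines with categories.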
It explored use cases by assessing and automatically categorising news articles using Drupal’s taxonomy system. Those categories were merged with sentiment analysis in order to make a recommendation system for a hypothetical news audience.

Future of Machine learning
A report by MarketsandMarkets states that the machine learning market will grow from USD 1.41 billion in 2017 to USD 8.81 billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1%.
The report further states that the major driving factors for the global machine learning market are technological advancement and the proliferation of data generation. Moreover, increasing demand for intelligent business processes and the rising adoption rates of modern applications are expected to offer opportunities for further growth.
Some of the near-term predictions are:
- Most applications will include machine learning. In a few years, machine learning will become part of almost every software application, with engineers embedding these capabilities directly into our devices.
- Machine learning as a service (MLaaS) will be commonplace. More businesses will start using the cloud to offer MLaaS and take advantage of machine learning without making huge hardware investments or training their own algorithms.
- Computers will get good at talking like humans. As technology gets better and better, solutions such as IBM Watson Assistant will learn to communicate endlessly without using code.
- Algorithms will perpetually retrain. In the near future, more ML systems will connect to the internet and constantly retrain on the most relevant information.
- Specialised hardware will deliver performance breakthroughs. GPUs (Graphics Processing Units) are advantageous for running ML algorithms as they have a large number of simple cores. AI experts are also leveraging Field-Programmable Gate Arrays (FPGAs), which can, at times, even outclass GPUs.
Computers ruling us someday by gaining a superabundance of intelligence is not a likely outcome, though it remains a possibility, which is why it is widely debated whenever artificial intelligence and machine learning are discussed.
On the brighter side, machine learning has plenty of scope for making our lives better, with its tremendous capability of providing unprecedented insights into different matters. And when Drupal and machine learning come together, it is even more exciting, as it results in awesome web experiences.
Opensense Labs always strives to fulfil digital transformation endeavours of our partners with a suite of services.
Contact us at email@example.com to know how machine learning can be put to great use in your Drupal web application.
Jeff Geerling's Blog: Drupal startup time and opcache - faster scaling for PHP in containerized environments
Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments; one problem that's bothered me lately is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.
One of the principles of the 12 Factor App is:
Maximize robustness with fast startup and graceful shutdown.
Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.
This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.
Configuration management is an important feature of any modern content management system. Those following modern development best-practices use a development workflow that involves some sort of development and staging environment that is separate from the production environment.
Given such a development workflow, you need to push configuration changes from development to production (similar to how you need to push code or content between environments). Drupal's configuration management system helps you do that in a powerful yet elegant way.
Since I announced the original Configuration Management Initiative over seven years ago, we've developed and shipped a strong configuration management API in Drupal 8. Drupal 8's configuration management system is a huge step forward from where we were in Drupal 7, and a much more robust solution than what is offered by many of our competitors.
All configuration in a Drupal 8 site — from one-off settings such as site name to content types and field definitions — can be seamlessly moved between environments, allowing for quick and easy deployment between development, staging and production environments.
However, now that we have a couple of years of building Drupal 8 sites behind us, various limitations have surfaced. While these limitations usually have solutions via contributed modules, it has become clear that we would benefit from extending Drupal core's built-in configuration management APIs. This way, we can establish best practices and standard approaches that work for all.
The four different focus areas for Drupal 8. The configuration management initiative is part of the 'Improve Drupal for developers' track.
I first talked about this need in my DrupalCon Nashville keynote, where I announced the Configuration Management 2.0 initiative. The goal of this initiative is to extend Drupal's built-in configuration management so we can support more common workflows out-of-the-box without the need of contributed modules.
What is an example workflow that is not currently supported out-of-the-box? Support for different configurations by environment. This is a valuable use case because some settings are undesirable to have enabled in all environments. For example, you most likely don't want to enable debugging tools in production.
The contributed module Config Filter extends Drupal core's built-in configuration management capabilities by providing an API to support different workflows which filter out or transform certain configuration changes as they are being pushed to production. Config Split, another contributed module, builds on top of Config Filter to allow for differences in configuration between various environments.
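The kind of per-environment toggle Config Split enables can be sketched in a site's settings.php; both the split machine name (`dev`) and the `ENVIRONMENT` variable below are hypothetical names for illustration, not part of the module's defaults:

```php
<?php

// settings.php (sketch): enable the 'dev' configuration split only outside
// production, so debugging tools never end up enabled in prod.
// 'dev' and the ENVIRONMENT variable name are assumptions for this example.
if (getenv('ENVIRONMENT') !== 'prod') {
  $config['config_split.config_split.dev']['status'] = TRUE;
}
```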
The Config Split module's use case is just one example of how we can improve Drupal's out-of-the-box configuration management capabilities. The community created a longer list of pain points and advanced use cases for the configuration management system.
While the initiative team is working on executing on these long-term improvements, they are also focused on delivering incremental improvements with each new version of Drupal 8, and have distilled the most high-priority items into a configuration management roadmap.
- In Drupal 8.6, we added support for creating new sites from existing configuration. This enables developers to launch a development site that matches a production site's configuration with just a few clicks.
- For Drupal 8.7, we're planning on shipping an experimental module for dealing with environment specific configuration, moving the capabilities of Config Filter and the basic capabilities of Config Split to Drupal core through the addition of a Configuration Transformer API.
- For Drupal 8.8, the focus is on supporting configuration updates across different sites. We want to allow both sites and distributions to package configuration (similar to the well-known Features module) so they can easily be deployed across other sites.
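The Drupal 8.6 item above (creating a new site from existing configuration) maps to a single command; a sketch assuming Drush 9.4 or later and a config sync directory committed with the codebase:

```shell
# Install a fresh site straight from previously exported configuration
# (requires Drupal >= 8.6 and a populated config sync directory).
drush site:install --existing-config --yes
```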
There are many opportunities to contribute to this initiative and we'd love your help.
If you would like to get involved, check out the Configuration Management 2.0 project and various Drupal core issues tagged as "CMI 2.0 candidate".
From all of us on the BADCamp organizing team, a huge thank you to the many volunteers, speakers, trainers, masseuses, waffle-makers, and our 1300+ registered attendees for making BADCamp a must-attend event, year in and year out!
You are the ones who build and grow the community; we just provide the rooms.
Watch (and Re-watch) Sessions
Thanks to the heroic efforts of our volunteers (shout out to @kevinjthull), we have posted recordings for most of our sessions.
Help us make next year's BADCamp even better. Take two minutes to submit your thoughts on our survey.
If you left something behind by mistake, we may have it! Don't give up. Read our post with a list of the things left behind.
Sponsors
A BIG thanks to Platform.sh, Pantheon & DDEV, and all our sponsors. Without them this magical event wouldn’t be possible.
See You Next Year!
Until then, the best way to keep in touch with us is to follow @badcamp on Twitter, where our intemperate social media team likes to leak event details way, way in advance.
I am currently building a Drupal 8 application that runs outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to from Acquia Cloud that don't work if you aren't inside an Acquia or Pantheon environment; most notably, the automatic Configuration Split settings choice (for environments like local, dev, and prod) doesn't work in a custom hosting environment.
You basically have to reset the settings BLT provides and tell Drupal which config split should be active based on your own logic. In my case, I have a site with only local, ci, and prod environments. To override the settings defined in BLT's included config.settings.php file, I created my own config.settings.php at docroot/sites/settings/config.settings.php with the following contents:
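A minimal sketch of that kind of override follows; the `APP_ENV` environment variable name is an assumption (substitute whatever your hosting exposes), while local, ci, and prod are this site's split names:

```php
<?php

// docroot/sites/settings/config.settings.php (sketch).
// Pick the active Config Split from an environment variable instead of
// relying on BLT's Acquia/Pantheon environment detection.
$split = getenv('APP_ENV') ?: 'local';

// Turn every split off, then turn on the one matching this environment.
foreach (['local', 'ci', 'prod'] as $name) {
  $config["config_split.config_split.$name"]['status'] = ($name === $split);
}
```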
What's your favorite tool for creating content layouts in Drupal? Paragraphs, Display Suite, Panelizer, or maybe Panels? Or CKEditor styles & templates? How about the much-talked-about and yet still experimental Drupal 8 Layout Builder module?
Have you "played” with it yet?
As Drupal site builders, we all agree that a good page layout builder should be: