Drupal Planet

Subscribe to the Drupal Planet news feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 1 hour 51 seconds ago

Dropsolid: James & Jenny, our toolbox for faster Drupal development

Thu, 11/09/2017 - 05:30
09 Nov | Nick Vanpraet | Tech | Drupal 8

Be aware: this is a long read, but one with plenty of value. Only read this if you are ready to uncover our Dropsolid team's exciting dev tool and platform secrets!

 

James & Jenny might sound more like a comedy double act or the protagonists of a long-forgotten tale, but they are in fact very much alive and kicking. They are the names we gave to the platforms that we developed in-house to spin up environments faster and get work done more efficiently. How? Read on!

 

In practice

Whenever we want to spin up a new server, start a new project or even create a new testing environment, we still rely on our infrastructure team. A while ago we managed to automate our build pipeline with some smart configuration of Jenkins, an open source piece of software. Combined with a permission system, we are already able to let technical clients or consultants participate in the development process of a site by triggering a build of an environment. We decided to call this home-coded piece of software James, our in-house Drupal Cloud Butler. However, this UI was very cluttered and it was easy to break the chain. Maintenance-wise, it wasn’t the friendliest system either. James 0.1 was very helpful, but needed polishing.

Behind the scenes we started building a proper platform that was designed to supersede this existing system and take over the creation of new servers, projects and environments by adding a layer on top of this - a layer that could talk to Jenkins and would be able to execute Ansible playbooks through a managed system via RabbitMQ. You could see this as James 0.2. This version of James only has one account and isn’t built with a great many permissions in mind. Its purpose is very simple: get stuff done. This means we still can’t let clients or internal staff create new environments on James directly or set up new projects. But we’d really like to.

This is why we’re currently also investing heavily in the further development of Jenny, the site-spinning machine. Jenny’s aim is to be a user-friendly layer on top of James and it consists of two parts: a loosely decoupled Angular application consuming a Drupal 8 backend exposed through a REST API, which in turn talks to James through its REST API. Because Jenny makes sure only calls that are allowed go through to James, James can stay focused on functionality without having to add a ton of logic to make sure the request is valid. If the person who wants that new environment isn’t allowed to request one, Jenny won’t ask James to set it up in the first place.

 

How it works

 

A Jenny user will be able to create a new organization, and within that organization create new projects or clone existing ones. These projects can be housed on our servers or on external hosting (with or without VPN, Firewalls or anything else that’s required). They’ll be able to create new environments, archive entire projects or just a single environment, build, back up, restore, sync across environments, log in to an environment’s site, etc. It will even contain information about the health of the servers and also provide analytics about the sites themselves.

Now, because single-person organisations are rather non-existent, that user will be able to add other users to their organization and give them different permissions based on their actual role within the company. A marketer doesn’t need to know the health of a feature-testing environment, and a developer has little use for analytics about the live environment.

The goal of this permission system is to provide the client with enough options that they can restrict a developer from archiving live but allow them to create a new testing environment and get all needed information and access for that environment. As a side note: these aren’t standard Drupal permissions, because those are for members within an organization, and a single user can be a part of many organizations and have different permissions for each one.

 

End-to-end

But all these layers have to be able to talk to each other before any of that can happen. JennyA(ngular) has to talk to JennyB(ackend), and JennyB then has to make sure the request is valid and talk to James. Whatever information James returns has to be checked by JennyB, stored in the database if needed, and then transformed into a message that JennyA can do something with.

To make sure we can actually pull this off, we created the following test case:

How do we trigger a build of an environment in Jenkins from JennyA, and how do we show the build log from Jenkins in JennyA?

JennyA: build the page, get project and environment info from JennyB, create a button and send a request to the API. How this process happens exactly will be explained in a different post.

JennyB

For this REST resource we need two entities: Project and Environment.
We create some new permissions (defined as options in an OrgRole entity) for our Environment entity type:

  • Create environment
  • Edit environment
  • Delete environment
  • Archive environment
  • View environment
  • View archived environment
  • Build environment

In addition, we build a custom EntityAccessControlHandler that checks these custom permissions. An AccessControlHandler must have two methods: checkAccess() and checkCreateAccess(). In both we want to make sure Drupal’s normal permissions (which for this entity we reduce to simply ‘administer project environment entities’) still rule supreme, so superadmins can debug everything. That is why both access checks start with a normal, bog-standard $account->hasPermission() check.

if ($account->hasPermission('administer project environment entities')) {
  return AccessResult::allowed();
}

But then we have to add some extra logic to make sure the user is allowed to do whatever it is they’re attempting to do. For that we grab that user’s currently active Membership. A Membership is a simple entity that combines a user, an organization, and an OrgRole entity which says what permissions the user has within that organization. For non-Create access we first check if this user is even a part of the same organization as the entity they’re trying to save.

// Get the organization for this project environment.
$organization = $entity->getProject()->getOrganization();
// Check that the active membership and the attached organization match.
$accessResult = Membership::checkIfAccountIsPartOfCorrectOrganization($organization, $account);
if ($accessResult->isForbidden()) {
  return $accessResult;
}

For brevity’s sake, I won’t explain exactly how checkIfAccountIsPartOfCorrectOrganization does its checks. But it returns an AccessResultInterface object and does exactly what it says on the tin. It also includes a reason for forbidding access, so we can more easily debug problems. You can just add a string to the creation of an AccessResult or use $accessResult->setReason(), and you can then grab it using $accessResult->getReason(). Take note: only forbidden and neutral results implement that method. Make sure the result implements AccessResultReasonInterface before calling either method.

if ($accessResult instanceof AccessResultReasonInterface) {
  $accessResult->getReason();
}

We use this extensively with our unit testing, so we know exactly why something fails.
Assuming our test passes, we can finally check if this user has the correct permissions.

$entityOrganizationMembership = User::load($account->id())->getActiveMembership();
switch ($operation) {
  case 'view':
    if (!$entity->isActive()) {
      return $this->allowedIf($entityOrganizationMembership->hasPermission('view archived project environment'), 'member does not have "view archived project environment" permission');
    }
    return $this->allowedIf($entityOrganizationMembership->hasPermission('view project environment'), 'member does not have "view project environment" permission');

  case 'update':
  case 'delete':
  case 'archive':
  case 'build':
    return $this->allowedIf($entityOrganizationMembership->hasPermission($operation . ' project environment'), 'member does not have "' . $operation . ' project environment" permission');
}
// Unknown operation, no opinion.
return AccessResult::neutral('No operation matches found for operation: ' . $operation);

As you might have noticed, normally when you load a User you don’t get a getActiveMembership() method. But we extended the base Drupal User class and added it there. We also set that new class as the default class for the User entity, which is actually very easy:

function hook_entity_type_build(&$entity_types) {
  if (isset($entity_types['user'])) {
    $entity_types['user']->setClass('Drupal\my_module\Entity\User');
  }
}

Now loading a user returns an instance of our own class.

For createAccess() things get trickier, because at that point the entity doesn’t exist yet. This makes it impossible to check if it’s part of the correct organization (or in this case, the correct project, which is in turn part of an organization). So here we also have to implement a field-level Constraint on the related project field. This article explains how to create a field-level Constraint.

In this Constraint we can do our Membership::checkIfAccountIsPartOfCorrectOrganization check and be sure nobody will be able to save an environment to a project for an organization they are not a part of, regardless of whether they are creating one or saving one (somehow having bypassed our access check). To make doubly sure, we also set the $validationRequired property on our Environment class to TRUE. This way entities will always demand to be validated first. If they are not, or they have errors, an exception will be thrown.
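As a rough illustration of such a field-level Constraint (a sketch only: the namespace, plugin ID, class names and violation message are hypothetical, and the real checks are described in the article linked above), the plugin and its validator might look roughly like this:

<?php

namespace Drupal\my_module\Plugin\Validation\Constraint;

use Symfony\Component\Validator\Constraint;

/**
 * Checks that the referenced project belongs to one of the user's organizations.
 *
 * @Constraint(
 *   id = "ProjectOrganizationMatch",
 *   label = @Translation("Project organization match", context = "Validation")
 * )
 */
class ProjectOrganizationMatchConstraint extends Constraint {

  public $message = 'You are not a member of the organization that owns this project.';

}

/**
 * Validator for the constraint above (normally lives in its own file).
 */
class ProjectOrganizationMatchConstraintValidator extends \Symfony\Component\Validator\ConstraintValidator {

  public function validate($items, Constraint $constraint) {
    if ($items->isEmpty()) {
      return;
    }
    // The referenced Project entity and its Organization.
    $organization = $items->entity->getOrganization();
    $account = \Drupal::currentUser();
    // Reuse the same membership check as the access handler.
    $result = \Drupal\my_module\Entity\Membership::checkIfAccountIsPartOfCorrectOrganization($organization, $account);
    if ($result->isForbidden()) {
      $this->context->addViolation($constraint->message);
    }
  }

}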

Now we can finally build our REST resource. Since a Jenkins build doesn’t exist as a custom entity within JennyB (yet), we create a custom REST resource. We use Drupal Console for this and set the canonical path to “/api/project_environment/{project_environment}/build/{id}” and the “create” path to “/api/project_environment/{project_environment}/build”. We then create another resource and set that one’s canonical path to “/api/project_environment/{project_environment}/build”, the same as our first resource’s “create” path. This way, when you POST to that path you trigger a new build, and when you GET you receive a list of all builds for that environment. We have to split this into two resources, because each resource can only use each HTTP method once.
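For illustration, the first resource's annotation might look roughly like this. This is a sketch only: the plugin ID and class name are hypothetical, and the exact annotation key used for the "create" URI path can differ between core versions.

<?php

namespace Drupal\my_module\Plugin\rest\resource;

use Drupal\rest\Plugin\ResourceBase;

/**
 * Triggers and lists Jenkins builds for a project environment.
 *
 * @RestResource(
 *   id = "project_environment_build",
 *   label = @Translation("Project environment build"),
 *   uri_paths = {
 *     "canonical" = "/api/project_environment/{project_environment}/build/{id}",
 *     "create" = "/api/project_environment/{project_environment}/build"
 *   }
 * )
 */
class ProjectEnvironmentBuildResource extends ResourceBase {

  // The get() and post() methods go here; the routes() override and the
  // build-triggering logic are shown in the snippets below.

}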


We generate these resources using Drupal Console. But before we can begin with our logic proper, we have to make sure the ProjectEnvironment entity gets loaded automatically. For this we override the routes() method of the parent class.

public function routes() {
  $collection = parent::routes();
  // Add our paramconverter to all routes in the collection.
  // If we could only add options to a few routes, we would have
  // to loop over $collection->all() and add them to specific ones.
  // Internally, that is exactly what the addOptions method does anyway.
  $options['parameters']['project_environment'] = [
    'type' => 'entity:project_environment',
    'converter' => 'paramconverter.entity',
  ];
  $collection->addOptions($options);
  return $collection;
}

In the routes() method you can add or remove options and requirements to your heart’s content. Whatever you can normally do in a routing.yml file, you can also do here. We've explained this in more detail in this blog post.

Let’s take a closer look at our create path. First we’ll need to make sure the user is allowed to build. Luckily, thanks to our custom access handler, this is very easy.

// Check if the user can build.
$entity_access = $projectEnvironment->access('build', NULL, TRUE);
if (!$entity_access->isAllowed()) {
  // If it’s not allowed, we know it’s a forbidden or neutral response,
  // which implements the Reason interface.
  throw new AccessDeniedHttpException($entity_access->getReason());
}

Now we can ask James to trigger the build.

// Talk to James.
$data['key'] = self::VALIDATION_KEY;
$url = self::API_URL . '/project/' . $projectEnvironment->getProject()->getRemoteProjectID() . '/environment/' . $projectEnvironment->getRemoteEnvironmentID() . '/build';
$response = $this->httpClient->request('POST', $url, array('json' => $data));
$responseData = json_decode($response->getBody()->getContents(), TRUE);

For this test we use a simple key that James uses for authentication and build the URL in our REST resource. Eventually this part will be moved to a library and the code might look something like this:

$remoteProjectID = $projectEnvironment->getProject()->getRemoteProjectID();
$remoteEnvironmentID = $projectEnvironment->getRemoteEnvironmentID();
$response = $this->jamesConnection->triggerNewBuild($remoteProjectID, $remoteEnvironmentID, $data);
$responseData = json_decode($response->getBody()->getContents(), TRUE);

We check the data we get back and if everything has gone well, we can update our local ProjectEnvironment entity with the new currently deployed branch.

if ($response->getStatusCode() == 200 && $data['branch'] !== $projectEnvironment->getCurrentlyDeployedBranch()) {
  // Everything went fine, so also update the $projectEnvironment to reflect
  // what the currently deployed branch is.
  $projectEnvironment->setCurrentlyDeployedBranch($data['branch']);
  // Validate the entity.
  $violations = $projectEnvironment->validate();
  foreach ($violations as $violation) {
    $errors[] = $violation->getMessage();
  }
  if (isset($errors)) {
    throw new BadRequestHttpException("Entity save validation errors: " . implode("\n", $errors));
  }
  // Save it.
  $projectEnvironment->save();
}

Running validate is necessary, because we set the $validationRequired property to TRUE for our entity type. If something goes wrong, including our custom Constraints, we throw a Bad Request exception and output the validation errors.

Then we simply return what James gave us.

return new ResourceResponse($responseData, $response->getStatusCode());

On James’ end, it’s mostly the same but instead of checking custom access handlers, we (for now) just validate the key. And James in turn calls Jenkins’ API. This will also change, and James will hand off the build trigger to RabbitMQ. But for the purpose of this test, we communicate with Jenkins directly.

James then returns the ID of the newly triggered build to JennyB, which returns it to JennyA. JennyA then uses that ID to poll JennyB’s canonical Build route until the build succeeds or fails.

 

Curious to read more interesting Drupal-related tidbits? Check out the rest of our blog, or subscribe to our newsletter to stay up to date every three months!

myDropWizard.com: Using lots of different tools? Do it all in Drupal instead!

Thu, 11/09/2017 - 03:43

You need a website. You need to send an e-mail newsletter. You need to track (potential) volunteers, donors, or customers. You could use Drupal, Mailchimp and HubSpot. Or you could do it all in Drupal.

We've been using the tools above in our own organization, and we continue to use them. Yet we've been toying with the idea of moving more of our daily usage to a more Drupal-based solution. I'll try to outline some of the pros and cons of each approach. I think you'll see that for many organizations the Drupal solution could end up on the winning side of the decision!

The Heavyweight Single Purpose Tools

We've used a number of web-based services at myDropWizard to help keep sales, projects, and customer communication on track.

I'll outline just a few very popular ones that we use and that would make for a good comparison with a Drupal solution.

MailChimp

Currently, we use MailChimp for newsletters. I think MailChimp is a champion product with low prices and great features. MailChimp is probably the most used email newsletter platform, so its strengths are well known.

Elevated Third: Marketing Automation, Meet Drupal

Wed, 11/08/2017 - 18:06
Marketing Automation, Meet Drupal | Andy Mead | Wed, 11/08/2017 - 13:06

Oh, hi there. I’d be lying if I said I wasn’t expecting you. This is a blog after all. And supposedly people read these things, which is, supposedly, why you’re here. So pull up a seat (if you’re not already sitting) and I’ll tell you why Drupal is a great partner for Marketing Automation.

Ah, Marketing Automation. (Hereafter MA, because why read 7 syllables when you can read 2?) It’s arguably the most hyped business technology of the last decade or so, spoken about in hushed tones, as though simply subscribing to a platform will print money for you. Sadly that’s not the truth. But when used properly with digital strategy, it’s pretty good at what it does: capturing latent demand and turning it into sales. The tricky part is the modifying clause that opened the last sentence, “when used properly.”

What to expect from Marketing Automation?

Marketing Automation tools and platforms these days come loaded with bells and whistles: from custom reporting engines to fancy drag-n-drop campaign UIs, and WYSIWYGs that let marketers build digital assets like landing pages and emails. And yet, despite all that fanciness, it’s still really hard to do Marketing Automation right. Why? Well, leaving aside strategic questions (a massive topic on its own), my own experience with MA always left me wanting two things - expressibility and scalability.

Drupal + Marketing Automation

While publishing workflows in Marketing Automation tools have improved over the years, they still can’t compete with a CMS, particularly one as powerful as Drupal. Drupal empowers users to express content in terms that go far beyond simple landing pages.

In fact, Drupal is used today for just about anything you can imagine, from powering Fortune 500 marketing websites to running weather.com and acting as the backbone of custom web applications. What’s possible with Drupal is really up to you. Just ask the guy who built it.

So, fine. Drupal is great and everything. But how does it help your marketing? Well, because Drupal is so flexible, you can integrate it with almost anything: Google Analytics, Pardot, Marketo, Eloqua, Salesforce, and on, and on, and on. In a quickly changing technology landscape that’s an incredible strength, because Drupal can act as the nervous system for your marketing technology stack.

“Marketing technology stack?” Yeah, I don’t like business jargon, either. But, it’s a helpful way to think about digital marketing tools. Because they are just that: tools with strengths and weaknesses. You probably wouldn’t use a screwdriver to drive a nail into the wall. Sure, you could, but there’s a better tool for the job: a hammer. Likewise, your MA platform could power all your digital assets, but there’s a better tool for that job, too: Drupal.

The right tools for the job

In my experience, organizing these tools around their strengths brings better results. And here at Elevated Third, we’ve done that by connecting Drupal to Marketing Automation platforms like Pardot, Marketo, and SharpSpring, using it as the front end for the services that power marketing programs. Moreover, MA is only one piece of that puzzle. Want to use something like HotJar? Drupal is happy to.

Open source means flexibility 

So where does this flexibility come from? Drupal is open source software, and there’s a massive developer community that improves it daily. Probably the greatest strength of open source software is its flexibility.

You don’t like the way something works? Easy. Let’s change it.

Is something broken? No problem, let’s fix it.

Got a new problem that off-the-shelf solutions don’t solve? Well, then, let’s build a solution for it.

Is Drupal the right tool for every job? I’d be lying (again) if I said it was. But it’s the right tool for jobs that require unique, flexible solutions. And it could be the right tool for your job, too. If you are curious, let's talk.

Cheeky Monkey Media: The Drupal Checklist Every Developer Needs

Wed, 11/08/2017 - 17:49
The Drupal Checklist Every Developer Needs | cody | Wed, 11/08/2017 - 19:49

Are you almost finished setting up your Drupal website? At a glance, everything might look ready to go.

But, before you hit "publish," you need to make sure you haven't made any mistakes.

A writer proofreads before they post an article. Similarly, a developer should double check their work.

The last thing you want is to go live with your site and have something go wrong. Finding problems before you launch can save some headaches and embarrassment.

We've compiled a pre-launch, Drupal checklist. When it's complete, you'll rest easy knowing that your website is ready to go.

Security

Security is the first on this Drupal checklist because it's so important. Of course you want to rest easy knowing that your site is secure when it launches. You also want your users to have peace of mind knowing that their information is safe.

Double checking your site's security will ensure that there's nothing you've missed that could make you vulnerable to hackers.

Evolving Web: Profiling and Optimizing Drupal Migrations with Blackfire

Wed, 11/08/2017 - 17:34

A few weeks ago, we at Evolving Web finished migrating the Princeton University Press website to Drupal 8. The project was over 70% migrations. In this article, we will see how Blackfire helped us optimize our migrations by changing around two lines of code.

Before we start
  • This article is mainly for PHP / Drupal 8 back-end developers.
  • It is assumed that you know about the Drupal 8 Migrate API.
  • Code performance is analyzed with a tool named Blackfire.
  • Front-end performance analysis is not in the scope of this article.
The Problem

Here are some of the project requirements related to the problem. These should help you get a better picture of what's going on:

  • A PowerShell script exports a bunch of data into CSV files on the client's server.
  • A custom migration plugin PUPCSV uses the CSV files via SFTP.
  • Using hook_cron() in Drupal 8, we check the hash of each CSV file.
  • If a file's MD5 hash has changed, the migration is queued for import using the Drupal 8 Queue API (a rough sketch of this follows the list below).
  • The CSV files usually have 2 types of changes:
    • Certain records are updated here and there.
    • Certain records are added to the end of the file.
  • When a migration is executed, migrate API goes line-by-line, doing the following things for every record:
    • Read a record from the data source.
    • Merge data related to the record from other CSV files (kind of an inner join between CSVs).
    • Compute hash of the record and compare it with the hash stored in the database.
    • If a hash is not found in the database, the record is created.
    • If a hash is found and it has changed, the record is updated.
    • If a hash is unchanged, no action is taken.
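To make the setup above more concrete, here is a rough sketch of what such a cron hook could look like. This is an assumption-heavy illustration, not the project's actual code: the module name, file locations, queue name and state keys are all hypothetical, the real project fetches the CSVs over SFTP, and it stores the new hash from its source plugin only after a successful import.

<?php

/**
 * Implements hook_cron().
 *
 * Hypothetical sketch: queue a migration for import whenever the MD5 hash of
 * its source CSV changes.
 */
function mymodule_cron() {
  // Hypothetical mapping of migration IDs to their exported CSV files.
  $csv_files = [
    'pup_subjects' => 'private://pup/subjects.csv',
    'pup_books' => 'private://pup/books.csv',
  ];
  $state = \Drupal::state();
  $queue = \Drupal::queue('mymodule_migration_import');

  foreach ($csv_files as $migration_id => $uri) {
    $hash = md5_file($uri);
    $previous_hash = $state->get('mymodule.csv_hash.' . $migration_id);
    if ($hash !== $previous_hash) {
      // The file changed since the last import, so queue the migration. The
      // new hash is stored elsewhere (in the source plugin, once the last row
      // has been migrated), so an interrupted import is retried next run.
      $queue->createItem(['migration' => $migration_id]);
    }
  }
}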

While running migrations, we figured out that it was taking too much time for migrations to go through the CSV files, simply checking for changes in row hashes. For big migrations with over 40,000 records, migrate was taking several minutes to reach the end of the file, even on a high-end server. Since we were running migrations during cron (with Queue Workers), we had to ensure that any individual migration could be processed within the 3-minute PHP maximum execution time limit available on the server.

Analyzing migrations with Blackfire

At Evolving Web, we usually analyze performance with Blackfire before any major site is launched. Usually, we run Blackfire with the Blackfire Companion, which is currently available for Google Chrome and Firefox. However, since migrations are executed using drush, which is a command line tool, we had to use the Blackfire CLI tool, like this:

$ blackfire run /opt/vendor/bin/drush.launcher migrate-import pup_subjects
Processed 0 items (0 created, 0 updated, 0 failed, 0 ignored) - done with 'pup_subjects'
Blackfire Run completed

Upon analyzing the Blackfire reports, we found some 50 unexpected SQL queries being triggered from somewhere within the PUPCSV::fetchNextRow() method. Quite surprising! PUPCSV refers to a migrate source plugin we wrote for fetching CSV files over FTP / SFTP. This plugin also tracks a hash of the CSV files and thereby allows us to skip a migration completely if the source files have not changed. If the source hash changes, the migration updates all rows, and when the last row has been migrated, we store the file's hash in the database from PUPCSV::fetchNextRow(). As a matter of fact, we are preparing another article about creating custom migrate source plugins, so stay tuned.

We found one database query per row even though no record was being created or updated. It didn't seem very harmful until we saw the Blackfire report.

Code before Blackfire

Taking a closer look at the fetchNextRow() method, we found a call to MigrateSourceBase::count(), which turned out to be taking 40% of the processing time! This is because it was being called for every row in the CSV. Since the source/cache_counts parameter was not set to TRUE in the migration YAML files, the count() method was iterating over all items to get a fresh count for each call! Thus, for a migration with 40,000 records, we were going through 40,000 x 40,000 records, and the PHP maximum execution time was being reached even before migrate could get to the last row! Here's a look at the code.

protected function fetchNextRow() {
  // If the migration is being imported...
  if (MigrationInterface::STATUS_IMPORTING === $this->migration->getStatus()) {
    // If we are at the last row in the CSV...
    if ($this->getIterator()->key() === $this->count()) {
      // Store source hash to remember the file as "imported".
      $this->saveCachedFileHash();
    }
  }
  return parent::fetchNextRow();
}

Code after Blackfire

We could have added the cache_counts parameter in our migration YAML files, but any change in the source configuration of the migrations would have made the migrate API update all records in all migrations. This is because a row's hash is computed as something like hash($row + $source). We did not want migrate to update all records, because we had certain migrations which sometimes took around 7 hours to complete. Hence, we decided to statically cache the total record count to get things back on track:

protected function fetchNextRow() {
  // If the migration is being imported...
  if (MigrationInterface::STATUS_IMPORTING === $this->migration->getStatus()) {
    // Get total source record count and cache it statically.
    static $count;
    if (is_null($count)) {
      $count = $this->doCount();
    }
    // If we are at the last row in the CSV...
    if ($this->getIterator()->key() === $count) {
      // Store source hash to remember the file as "imported".
      $this->saveCachedFileHash();
    }
  }
  return parent::fetchNextRow();
}

Problem Solved. Merci Blackfire!

After the changes, we ran Blackfire again and found things to be 52% faster for a small migration with 50 records.

For a bigger migration with 4,359 records, the import time went down from 1m 47s to only 12s, which means a 98% improvement. Wondering why we didn't include a screenshot for the bigger migration? We did not (or rather could not) generate a report for the big migration, for two reasons:

  • While working, Blackfire stores function call and other information to memory. Running a huge migration with Blackfire might be a bit slow. Besides, our objective was to find the problem and we could do that more easily while looking at smaller figures.
  • When running a migration with thousands of rows, the migration functions are called thousands of times! Blackfire collects data for each of these function calls; hence, the collected data sometimes becomes too heavy, and Blackfire rejects the huge payload with an error message like this:
The Blackfire API answered with a 413 HTTP error ()
Error detected during upload: The Blackfire API rejected your payload because it's too big.

Which makes a lot of sense. As a matter of fact, for the other case study given below, we used the --limit=1 parameter to profile code performance for a single row.

A quick brag about another 50% Improvement?

Apart from this jackpot, we also found room for another 50% improvement (from 7h to 3h 32m) for one of our migrations which was using the Touki FTP library. This migration was doing the following:

  • Going through around 11,000 records in a CSV file.
  • Downloading the files over FTP when required.

A Blackfire analysis of this migration revealed something strange. For every row, the following was happening behind the scenes:

  • If a file download was required, we were doing FTP::findFileByName($name).
  • To get the file, Touki was:
    • Getting a list of all files in the directory;
    • Creating File objects for every file;
    • For every file object, various permission, owner and other objects were created.
    • Passing each file through a callback to see if its name was $name.
    • If the name was matching, the file was returned and all other File objects were discarded.

Hence, for downloading every file, Touki FTP was creating 11,000 File objects, of which it was only using one! To resolve this, we decided to use the lower-level FTP::get($source, $destination) method, which helped us bypass the 50,000 or more objects that were being created per record (approximately 11,000 * 50,000 or more for all records). This almost halved the import time for that migration when working with all 11,000 records! Here's a screenshot of Blackfire's report for a single row.

So the next time you think something fishy is going on with code you wrote, don't forget to use Blackfire! And don't forget to leave your feedback, questions and even article suggestions in the comments section below.

More about Blackfire

Blackfire is a code profiling tool for PHP which gives you nice-looking reports about your code's performance. With the help of these reports, you can analyze the memory, time and other resources consumed by various functions and optimize your code where necessary. If you are new to Blackfire, the links in the next steps below are a good place to start.

Apart from all this, the paid version of Blackfire lets you set up automated tests and gives you various recommendations for not only Drupal but various other PHP frameworks.

Next Steps
  • Try Blackfire for free on a sample project of your choice to see what you can find.
  • Watch video tutorials on Blackfire's YouTube channel.
  • Read the tutorial on creating custom migration source plugins written by my colleague (coming soon).

Lullabot: Styling the WYSIWYG Editor in Drupal 8

Wed, 11/08/2017 - 14:42

Drupal 8 ships with a built-in WYSIWYG editor called CKEditor. It’s great to have it included in core, but I had some questions about how to control the styling. In particular, I wanted the styling in the editor to look like my front-end theme, even though I use an administration theme for the node form. I spent many hours trying to find the answer, but it turned out to be simple, if a little confusing.

In my example, I have a front-end theme called “Custom Theme” that extends the Bootstrap theme. I use core’s “Seven” theme as an administration theme, and I checked the box to use the administration theme for my node forms. 

My front end theme adds custom fonts to Bootstrap and uses a larger than normal font, so it’s distinctively different than the standard styling that comes with the WYSIWYG editor. 

Front End Styling vs. WYSIWYG Styling (screenshot comparison)

Out of the box, the styling in the editor looks very different than my front-end theme. The font family and line height are wrong, and the font size is too small.


It turns out there are two ways to alter the styling in the WYSIWYG editor: adding some information to the default theme’s info.yml file, or implementing hook_ckeditor_css_alter() in either a module or the theme. The kicker is that the info.yml changes go in the FRONT END theme, even though I’m using an admin theme on the node form.
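For reference, the module-based alternative looks roughly like this (a minimal sketch; the module name and CSS path are hypothetical):

/**
 * Implements hook_ckeditor_css_alter().
 *
 * Adds a module-provided stylesheet to the CKEditor iframe. The module name
 * and file path here are placeholders.
 */
function mymodule_ckeditor_css_alter(array &$css, \Drupal\editor\Entity\Editor $editor) {
  $css[] = drupal_get_path('module', 'mymodule') . '/css/wysiwyg.css';
}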

I added the following information to my default theme’s info file, custom_theme.info.yml. The font-family.css and style.css files are the front-end theme CSS files that I want to pass into the WYSIWYG editor. Even if I select the option to use the front-end theme for the node form, the CSS from that theme will not make it into the WYSIWYG editor without this change, so it is necessary whether or not you use an admin theme on the node form!

name: "Custom Theme" description: A subtheme of Bootstrap theme for Drupal 8. type: theme core: 8.x base theme: bootstrap ckeditor_stylesheets: - https://fonts.googleapis.com/css?family=Open+Sans - css/font-family.css - css/style.css libraries: ... WYSIWYG Styling

After this change, the font styles in the WYSIWYG editor match the text in the primary theme.


When CKEditor builds the editor iframe, it checks to see which theme is the default theme, then looks to see if that theme has values in the info.yml file for ckeditor_stylesheets. If it finds anything, it adds those CSS files to the iframe. Relative CSS file URLs are assumed to be files in the front-end theme’s directory, or you can use absolute URLs to other files.

The contributed Bootstrap theme does not implement ckeditor_stylesheets, so I had to create a sub-theme to take advantage of this. I always create a sub-theme anyway, to add in the little tweaks I want to make. In this case, my sub-theme also uses a Google font instead of the default font, and I can also pass that font into the WYSIWYG editor.

TaDa!

That was easy to do, but it took me quite a while to understand how it worked. So I decided to post it here in case anyone else is as confused as I was.

More Information

To debug this further and understand how to affect the styling inside the WYSIWYG editor, you can refer to the relevant code in two files in core. From ckeditor.module:

/**
 * Retrieves the default theme's CKEditor stylesheets.
 *
 * Themes may specify iframe-specific CSS files for use with CKEditor by
 * including a "ckeditor_stylesheets" key in their .info.yml file.
 *
 * @code
 * ckeditor_stylesheets:
 *   - css/ckeditor-iframe.css
 * @endcode
 */
function _ckeditor_theme_css($theme = NULL) {
  $css = [];
  if (!isset($theme)) {
    $theme = \Drupal::config('system.theme')->get('default');
  }
  if (isset($theme) && $theme_path = drupal_get_path('theme', $theme)) {
    $info = system_get_info('theme', $theme);
    if (isset($info['ckeditor_stylesheets'])) {
      $css = $info['ckeditor_stylesheets'];
      foreach ($css as $key => $url) {
        if (UrlHelper::isExternal($url)) {
          $css[$key] = $url;
        }
        else {
          $css[$key] = $theme_path . '/' . $url;
        }
      }
    }
    if (isset($info['base theme'])) {
      $css = array_merge(_ckeditor_theme_css($info['base theme']), $css);
    }
  }
  return $css;
}

and Plugin/Editor/CKEditor.php:  

/**
 * Builds the "contentsCss" configuration part of the CKEditor JS settings.
 *
 * @see getJSSettings()
 *
 * @param \Drupal\editor\Entity\Editor $editor
 *   A configured text editor object.
 *
 * @return array
 *   An array containing the "contentsCss" configuration.
 */
public function buildContentsCssJSSetting(Editor $editor) {
  $css = [
    drupal_get_path('module', 'ckeditor') . '/css/ckeditor-iframe.css',
    drupal_get_path('module', 'system') . '/css/components/align.module.css',
  ];
  $this->moduleHandler->alter('ckeditor_css', $css, $editor);
  // Get a list of all enabled plugins' iframe instance CSS files.
  $plugins_css = array_reduce($this->ckeditorPluginManager->getCssFiles($editor), function($result, $item) {
    return array_merge($result, array_values($item));
  }, []);
  $css = array_merge($css, $plugins_css);
  $css = array_merge($css, _ckeditor_theme_css());
  $css = array_map('file_create_url', $css);
  $css = array_map('file_url_transform_relative', $css);
  return array_values($css);
}

Valuebound: Enabling custom web font in Drupal website

Wed, 11/08/2017 - 10:21

This blog will walk you through one of the contributed modules in the Drupal community that has been a lifesaver for me whenever I ran into trouble during site building. A couple of weeks back, I was assigned a task where the requirement was to enable the ‘Benton-sans Regular’ font throughout the site. Initially, I thought it would be an easy task that could be done quickly. But I was wrong.

No worries if you are facing similar difficulties. Here, I am going to discuss how you can enable the ‘Benton-sans Regular’ font seamlessly using the Drupal font-your-face module.

Flocon de toile | Freelance Drupal: Change the position of the meta data panel on the node form with Drupal 8

Wed, 11/08/2017 - 08:00
Content metadata (menu settings, publishing options, URL path settings, and so on) is by default displayed on the node form in a side panel. This has the advantage of giving immediate visibility of these options while writing content. But there are use cases where the lateral position of this information hurts the general ergonomics, because it reduces the space available for the content form. This can be the case, for example, if you use the Field Group module to structure and group the information you need to enter. No need for a Drupal expert here. Let's find out how we can make the position of this metadata customizable according to the needs and general ergonomics of a Drupal 8 project.

Agiledrop.com Blog: AGILEDROP: Top Drupal blogs from October

Wed, 11/08/2017 - 06:56
October is over, so it's time we present the top Drupal blogs written in October by other authors. Let's start with How to maintain Drush commands for Drush 8 and 9 and Drupal console with the same code base by Fabian Bircher from Nuvole. He shows us that the solution is actually really simple: it is all about separating the command discovery from the command logic. Check it out! Our second choice is Decoupled Drupal Hard Problems: Image Styles by Mateu Aguiló Bosch from Lullabot. He shows us the problems that arise when the back end doesn't know anything about the front-end design. He presents a… READ MORE

Savas Labs: The cost of investing in Drupal 7 - why it's time for Drupal 8

Tue, 11/07/2017 - 22:00

In the second of a two-part series, we investigate Drupal 8's present value and help highlight sometimes hidden costs of developing on an older platform. Continue reading…

Morpht: Announcing Enforce Profile Field for Drupal 8

Tue, 11/07/2017 - 21:42


The Enforce Profile Field is a new module which allows editors to enforce the completion of one or more fields in order to access content on a site. It is now available for Drupal 8.

Sometimes you need to collect a variety of profile data for different users. The data may be needed for regulatory compliance or marketing reasons. In some cases you need a single field, and in others it may be several. You may also wish to collect the information only when a user accesses certain parts of the site.

The Enforce Profile Field module comes to the rescue in cases such as these, forcing users to complete their profile before being able to move on to the page they want to see. This may sound harsh; however, collecting data as you need it is a more subtle approach than enforcing it all at registration time.

The implementation consists mainly of a new field type called "Enforce profile" and hook_entity_view_alter().

The module works as follows
  1. The site builder defines a “form display” for the user type bundle and specifies the fields associated with it to collect data.
    1. The fields should not be required, as this allows the user to skip them on registration and profile editing.
    2. In addition, the Form Mode Manager module can be used to display the “form display” as a "tab" on a user profile page.
  2. The site builder places an Enforce profile field onto an entity type bundle, such as a node article or page.
  3. The Enforce profile field requires some settings:
    1. A "User's form mode" to be utilized for additional field information extraction (created in the first step).
    2. An "Enforced view modes" that require some profile data to be filled in before being able to access them. You should usually select the "Full content" view mode and rather not include view modes like "Teaser" or "Search".
  4. The editor creates content, an article or page, and can then select which fields need to be enforced.
    1. The editor is provided with multi-select of "User's form mode" fields.
    2. Selecting nothing is equal to no access change, no profile data enforcement.
  5. A new user navigates to the content and is redirected to the profile tab and is informed that they need to complete the fields.
  6. Fields are completed, form submitted and the user redirected back to the content.
    1. In case the user doesn't provide all enforced fields, the profile tab is displayed again with a message about which fields still need to be filled in.
Why use the Enforce Profile Field to collect additional profile data?
  • You may need a customer's information to generate a coupon or access token.
  • You may just want to know better with whom you share information.
  • Your users know exactly what content requires their additional profile data input rather than satisfying a wide range of requirements during registration. It just makes it easier for them.
  • The new profile data can be synced to a CRM or other system if required.

Let us know what you think.
 

Acro Media: Video: How Commerce 2.x Makes Taxes Simple

Tue, 11/07/2017 - 19:44

 

 

Tax regulations can be ridiculously complicated, particularly in the U.S., but Drupal has your back. With more inclusions and better integrations out of the box, Commerce 2.x represents a significant improvement from Commerce 1.x. Watch this High5 video for details!

Commerce 2.x now includes:
  • Native integration with Avalara
    That means full integration for every region that Avalara handles. Integrations with Tax Cloud and TaxJar are also in the pipeline, so U.S.-based businesses will have a few different options.
  • Built-in tax rules for Canada and the EU (and more)
    These are now included right out of the box; no add-ons or third-party service required. As long as you stay up to date with your Commerce install, you will automatically get any new rules or changes. And if you sell to other countries, you can still build the tax rules and configure them yourself.
  • The ability to prescribe when a tax applies
    Besides being able to set what products a tax applies to and in what regions, you can now select when it applies. So if a tax rule is set to come into effect on January 1st, for instance, you can set that up way in advance and not have to be up at dawn on the big day to push a button. This functionality is also key when it comes to redoing old orders that were done under a different tax scheme.
As always, if you have questions about getting your site setup on Drupal Commerce 2, let us know! We'd love to help.

Agaric Collective: Conditional fields in Paragraphs using the Javascript States API for Drupal 8

Tue, 11/07/2017 - 12:03

While creating content, there are pieces of information that are only relevant when other fields have a certain value. For example, if we want to allow the user to upload either an image or a video, but not both, we can have another field for the user to select which type of media they want to upload. In these scenarios, the Javascript States API for Drupal 8 can be used to hide and show the input elements for image and video conditionally.

Note: Do not confuse the Javascript States API with the storage State API.

The basics: conditional fields in node forms

Let’s see how to accomplish the conditional fields behavior in a node form before explaining the implementation for paragraphs. For this example, let’s assume a content type with a machine name of article and three fields: field_image, field_video, and field_image_or_video. The field_image_or_video field is of type List (text) with the following values: Image and Video.

/**
 * Implements hook_form_alter().
 */
function nicaragua_form_alter(&$form, \Drupal\Core\Form\FormStateInterface $form_state, $form_id) {
  if ($form_id == 'node_article_form' || $form_id == 'node_article_edit_form') {
    $form['field_image']['#states'] = [
      'visible' => [
        ':input[name="field_image_or_video"]' => ['value' => 'Image'],
      ],
    ];
    $form['field_video']['#states'] = [
      'visible' => [
        ':input[name="field_image_or_video"]' => ['value' => 'Video'],
      ],
    ];
  }
}

Note that in Drupal 8 the node add and edit forms have different form IDs. Hence, we check for either one before applying the field states. After checking for the right forms to alter, we implement the fields’ states logic as such:

$form[DEPENDEE_FIELD_NAME]['#states'] = [
  DEPENDEE_FIELD_STATE => [
    DEPENDENT_FIELD_SELECTOR => ['value' => DEPENDENT_FIELD_VALUE],
  ],
];

DEPENDENT_FIELD_SELECTOR is a CSS selector for the HTML form element rendered in the browser, not to be confused with a nested Drupal form structure.

Conditional fields in Drupal 8 paragraphs

Although hook_form_alter() could be used for paragraphs as well, their deeply nested nature makes it super complicated. Instead, we can use hook_field_widget_form_alter() to alter the paragraph widget before it is added to the form. In fact, we are going to use the widget-specific hook_field_widget_WIDGET_TYPE_form_alter() to affect paragraphs only.

For this example, let’s assume a content type with a machine name of campaign and an entity reference field whose machine name is field_sections. The paragraph type where we want to apply the conditional logic has a machine name of embedded_image_or_video with the following fields: field_image, field_video, and field_image_or_video. The field_image_or_video field is of type List (text) with the following values: Image and Video.

/**
 * Implements hook_field_widget_WIDGET_TYPE_form_alter().
 */
function nichq_field_widget_paragraphs_form_alter(&$element, \Drupal\Core\Form\FormStateInterface $form_state, $context) {
  /** @var \Drupal\field\Entity\FieldConfig $field_definition */
  $field_definition = $context['items']->getFieldDefinition();
  $paragraph_entity_reference_field_name = $field_definition->getName();

  if ($paragraph_entity_reference_field_name == 'field_sections') {
    /** @see \Drupal\paragraphs\Plugin\Field\FieldWidget\ParagraphsWidget::formElement() */
    $widget_state = \Drupal\Core\Field\WidgetBase::getWidgetState($element['#field_parents'], $paragraph_entity_reference_field_name, $form_state);

    /** @var \Drupal\paragraphs\Entity\Paragraph $paragraph */
    $paragraph_instance = $widget_state['paragraphs'][$element['#delta']]['entity'];
    $paragraph_type = $paragraph_instance->bundle();

    // Determine which paragraph type is being embedded.
    if ($paragraph_type == 'embedded_image_or_video') {
      $dependee_field_name = 'field_image_or_video';
      $selector = sprintf('select[name="%s[%d][subform][%s]"]', $paragraph_entity_reference_field_name, $element['#delta'], $dependee_field_name);

      // Dependent fields.
      $element['subform']['field_image']['#states'] = [
        'visible' => [
          $selector => ['value' => 'Image'],
        ],
      ];
      $element['subform']['field_video']['#states'] = [
        'visible' => [
          $selector => ['value' => 'Video'],
        ],
      ];
    }
  }
}

Paragraphs can be referenced from multiple fields. If you want to limit the conditional behavior, you can check the name of the field embedding the paragraph using:

$field_definition = $context['items']->getFieldDefinition();
$paragraph_entity_reference_field_name = $field_definition->getName();

If you need more information on the field or entity where the paragraph is being embedded, the field definition (instance of FieldConfig) provides some useful methods:

$field_definition->getName(); // Returns the field_name property. Example: 'field_sections'.
$field_definition->getType(); // Returns the field_type property. Example: 'entity_reference_revisions'.
$field_definition->getTargetEntityTypeId(); // Returns the entity_type property. Example: 'node'.
$field_definition->getTargetBundle(); // Returns the bundle property. Example: 'campaign'.

In Drupal 8 it is a common practice to use the Paragraphs module to replace the body field. When doing so, a single field allows many different paragraph types. In that scenario, it is possible that different paragraph types have fields with the same name. You can add a check to apply the conditional logic only when one specific paragraph type is being embedded.

$widget_state = \Drupal\Core\Field\WidgetBase::getWidgetState($element['#field_parents'], $paragraph_entity_reference_field_name, $form_state);
$paragraph_instance = $widget_state['paragraphs'][$element['#delta']]['entity'];
$paragraph_type = $paragraph_instance->bundle();

The last step is to add the Javascript states API logic. There are two important things to consider:

  • The paragraph widget fields are added under a subform key.
  • Because multiple paragraphs can be referenced from the same field, we need to consider the order (i.e. the paragraph delta). This is reflected in the DEPENDENT_FIELD_SELECTOR.
$element['subform'][DEPENDEE_FIELD_NAME]['#states'] = [
  DEPENDEE_FIELD_STATE => [
    DEPENDENT_FIELD_SELECTOR => ['value' => DEPENDENT_FIELD_VALUE],
  ],
];

When adding the widget, the form API will generate markup similar to this:

<select data-drupal-selector="edit-field-sections-0-subform-field-image-or-video" id="edit-field-sections-0-subform-field-image-or-video--vtQ4eJfmH7k" name="field_sections[0][subform][field_image_or_video]" class="form-select required" required="required" aria-required="true">
  <option value="Image" selected="selected">Image</option>
  <option value="Video">Video</option>
</select>

So we need a selector like select[name="field_sections[0][subform][field_image_or_video]"] which can be generated using:

$selector = sprintf('select[name="%s[%d][subform][%s]"]', $paragraph_entity_reference_field_name, $element['#delta'], $dependee_field_name);

By using $element['#delta'] we ensure that the conditional field logic is applied to the proper instance of the paragraph. This works when a field allows multiple paragraphs, including multiple instances of the same paragraph type.

Warning: Javascript behavior does not affect user input

It is very important to note that the form elements are hidden and shown via Javascript. This does not affect user input. If, for example, a user selects image and uploads one, then changes the selection to video and sets one, then both the image and the video will be stored. Switching the selection from image to video and vice versa does not remove what the user had previously uploaded or set. Once the node is saved, if there are values for both the image and the video, both will be saved. One way to work around this when rendering the node is to toggle field visibility in the node Twig template. In my session "Twig Recipes: Making Drupal 8 Render the Markup You Want" there is an example of how to do this. Check out the slide deck and the video recording for reference.

What do you think of this approach to add conditional field logic to paragraphs? Let me know in the comments.

PreviousNext: Composing Docker Local Development: Networking

Mon, 11/06/2017 - 19:47

It's extremely important to have default values that you can rely on for local Drupal development; one of those is "localhost". In this blog post we will explore what is required to make our local development environment appear as "localhost".

by Nick Schuch / 7 November 2017

In our journey migrating to Docker for local dev, we found ourselves running into issues with "discovery" of services, e.g. Solr, MySQL and Memcache.

In our first iteration we used linking, allowing our services to talk to each other. Some downsides to this were:

  • Tricky to compose an advanced relationship; let's use PHP and PhantomJS as an example:
    • PHP needs to know where PhantomJS is running
    • PhantomJS needs to know the domain of the site that you are running locally
    • Wouldn't it be great if we could just use "localhost" for both of these configurations?
  • DNS entries were only available within the containers themselves, so we could not run utilities outside of the containers, e.g. a MySQL admin tool

With this in mind, we hatched an idea...

What if we could just use "localhost" for all interactions between all the containers.

  • If we wanted to access our local projects Apache, http://localhost (inside and outside of container)
  • If we wanted to access our local projects Mailhog, http://localhost:8025 (inside and outside of container)
  • If we wanted to access our local projects Solr, http://localhost:8983 (inside and outside of container)

All this can be achieved with Linux Network Namespaces in Docker Compose.

Network Namespaces

Linux Network Namespaces allow for us to isolate processes into their own "network stacks".

By default, the following happens when a container gets created in Docker:

  • Its own Network Namespace is created
  • A new network interface is added
  • An IP is provided on the default bridge network

However, if a container is created and told to share the same Network Namespace with an existing container, they will both be able to interface with each other on "localhost" or "127.0.0.1".

Here are working examples for both OSX and Linux.

OSX

  • Mysql and Mail share the PHP containers Network Namespace, giving us "localhost" for "container to container" communication.
  • Port mapping for host to container "localhost"
version: "3" services: php: image: previousnext/php:7.1-dev # You will notice that we are forwarding port which do not belong to PHP. # We have to declare them here because these "sidecar" services are sharing # THIS containers network stack. ports: - "80:80" - "3306:3306" - "8025:8025" volumes: - .:/data:cached db: image: mariadb network_mode: service:php mail: image: mailhog/mailhog network_mode: service:php

Linux

All containers share the Network Namespace of the user's host; nothing else is required.

version: "3" services: php: image: previousnext/php:7.1-dev # This makes the container run on the same network stack as your # workstation. Meaning that you can interact on "localhost". network_mode: host volumes: - .:/data db: image: mariadb network_mode: host mail: image: mailhog/mailhog network_mode: host Trade offs

To facilitate this approach we had to make some trade offs:

  • We can only run one project at a time. Only a single process can bind to port 80, 8983, etc.
  • We split the Docker Compose configuration into two separate files, making it simple for each OS to have its own approach.
Bash aliases

Since we split our Docker Compose files to be "per OS", we wanted to make it simple for developers to use them.

After a couple of internal developer meetings, we came up with some bash aliases that developers only have to set up once.

# If you are on a Mac.
alias dc='docker-compose -f docker-compose.osx.yml'

# If you are running Linux.
alias dc='docker-compose -f docker-compose.linux.yml'

A developer can then run all the usual Docker Compose commands with the shorthand dc command, e.g.

dc up -d

This also keeps the command docker-compose available if a developer is using an external project.

Simple configuration

The following solution has also provided us with a consistent configuration fallback for local development.

We leverage this in multiple places in our settings.php; here is one example (a second, hypothetical sketch follows the list below):

$databases['default']['default']['host'] = getenv("DB_HOST") ?: '127.0.0.1';
  • Dev / Stg / Prod environments set the DB_HOST environment variable
  • Local is always the fallback (127.0.0.1)
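
As another sketch of the same pattern, purely hypothetical (the Memcache server definition, environment variable and port below are assumptions, not part of the original setup):

// Hypothetical: the same env-var-with-localhost-fallback pattern applied to
// a Memcache server definition in settings.php.
$settings['memcache']['servers'] = [
  (getenv('MEMCACHE_HOST') ?: '127.0.0.1') . ':11211' => 'default',
];
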
Conclusion

While this approach required a deeper knowledge of the Linux kernel, it has yielded a much simpler workflow for developers.

How have you managed Docker local dev networking? Let me know in the comments below.

Tagged Docker, Drupal Development


Hook 42: Hook 42 at New England Drupal Camp

Mon, 11/06/2017 - 18:21

We're super excited to attend New England Drupal Camp this year!

Aimee is honored to have been invited to be the keynote speaker this year. She'll be discussing inclusion and diversity in the community. In addition to Aimee's keynote, we are partnering up with our longtime friends at Lingotek to put together a hands-on multilingual workshop that covers Drupal 8 and an integration to Lingotek's Translation Management System.

Just in case that wasn't enough, we're also presenting a couple of sessions: one comparing the madness of the multilingual modules in Drupal 7 to the new and improved Drupal 8 multilingual approach, and another covering how ANYONE and EVERYONE can help contribute back to the Drupal project, even if they aren't the most advanced technical person.

Wim Leers: Rendering & caching: a journey through the layers

Mon, 11/06/2017 - 16:11

The Drupal render pipeline and its caching capabilities have been the subject of quite a few talks of mine and of multiple writings. But all of those were very technical, very precise.

Over the past year and a half I’d heard multiple times there was a need for a more pragmatic talk, where only high-level principles are explained, and it is demonstrated how to step through the various layers with a debugger. So I set out to do just that.

I figured it made sense to spend 10–15 minutes explaining (using a hand-drawn diagram that I spent a lot of time tweaking) and spend the rest of the time stepping through things live. Yes, this was frightening. Yes, there were last-minute problems (my IDE suddenly didn’t allow font size scaling …), but it seems overall people were very satisfied :)

Have you seen and heard of Render API (with its render caching, lazy builders and render pipeline), Cache API (and its cache tags & contexts), Dynamic Page Cache, Page Cache and BigPipe? Have you cursed them, wondered about them, been confused by them?

I will show you three typical use cases:

  1. An uncacheable block
  2. A personalized block
  3. A cacheable block that you can see if you have a certain permission and that should update whenever some entity is updated

… and for each, will take you on the journey through the various layers: from rendering to render caching, on to Dynamic Page Cache and eventually Page Cache … or BigPipe.
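To make the three cases concrete, here is a minimal sketch of the #cache metadata such blocks typically carry in Drupal 8. It is illustrative only and not taken from the session; $account and $node stand for an injected current user and a loaded entity.

// 1. An uncacheable block: max-age 0 tells every cache layer to skip it
//    (BigPipe can still stream highly dynamic content via a placeholder).
$build['time'] = [
  '#markup' => t('Current time: @time', ['@time' => date('H:i:s')]),
  '#cache' => ['max-age' => 0],
];

// 2. A personalized block: the 'user' cache context stores one variation
//    per user, so render caching and Dynamic Page Cache stay correct.
$build['greeting'] = [
  '#markup' => t('Hello @name', ['@name' => $account->getDisplayName()]),
  '#cache' => ['contexts' => ['user']],
];

// 3. A cacheable block varying by permission and invalidated when an
//    entity changes: the context handles the permission-based variations,
//    the entity's cache tags invalidate it whenever the entity is saved.
$build['teaser'] = [
  '#markup' => $node->label(),
  '#cache' => [
    'contexts' => ['user.permissions'],
    'tags' => $node->getCacheTags(),
  ],
];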

Coming out of this session, you should have a concrete understanding of how these various layers cooperate, how you as a Drupal developer can use them to your advantage, and how you can test that it’s behaving correctly.

I’m a maintainer of Dynamic Page Cache and BigPipe, and an effective co-maintainer of Render API, Cache API and Page Cache.

Preview:

Slides: Slides with transcript
Video: YouTube
Conference: DrupalCon Vienna
Location: Vienna, Austria
Date: Sep 28 2017 - 14:15
Duration: 60 minutes
Extra information:

See https://events.drupal.org/vienna2017/sessions/rendering-caching-journey-through-layers.

Attendees: 200

Evaluations: 4.6/5

Thanks for the explanation. Your sketches about the rendering process and how dynamic cache, page cache and big pipe work together are awesome. It is very clear now for me.


Best session for me on DC. Good examples, loved the live demo; these live demos are much more helpful to me as a developer than static slides. General comments, not related to the speaker: the venue was too small for this talk and it should have been on a larger stage. Also, the location next to the exhibition stands made it a bit noisy when sitting in the back.


Great presentation! I really liked the hand-drawn figure and live demo, they made it really easy to understand and follow. The speaking was calm but engaging. It was great that you were so flexible on the audience feedback.

ThinkShout: My First BADCamp

Mon, 11/06/2017 - 10:30

We’re fresh off of BADCamp (Bay Area Drupal Camp), and we’re eager to share our experience with you! If you’ve ever thought about going to one of the local Drupal Camps in your area, or attending BADCamp yourself, we hope our takeaways persuade you to seek this out as a professional development opportunity.

BADCamp is essentially three days of intense workshops and sessions for Drupal users to hone their skills, meet other open source contributors, and make valuable connections in the community. Amongst the ThinkShout team, two had never attended BADCamp before. We were eager to hear their perspective on the conference and their key takeaways.

Sessions they attended covered component-based theming tools, object-oriented PHP, module development, debugging JavaScript, Drupal 9 and backward compatibility, and the importance of upgrading to D8 now.

Let’s hear from Mario and Lui–I mean Amy and Jules, on what their first BADCamp experience was like!

Amy and Jules on Halloween. Costumes are not required at BADCamp.

What did you learn at BADCamp?

Amy: Component-based theming is a hot topic these days for those building sites, for a number of reasons. Here are a couple of them:

  • It encourages a DRY (Don’t Repeat Yourself) and more organized theming code base.
  • It decouples site building in such a way that backend and frontend developers can work on the site at the same time, rather than the backend code needing to be built first before the frontend developer can do their work.
  • It provides clients with an interactive experience of their site (including responsiveness) before the database and backend elements are hooked up to it. This allows the client more time to provide feedback in case they want to change behaviors before they’re completely built.

I also attended a session called React, GraphQL, and Drupal. This talk was largely about an opportunity to create multiple sites using the same API. The team used "headless Drupal" (to serve as the API), React.js to build the sites, and GraphQL to explore data coming from the API in a much more direct and clear way. It seemed like a great solution for a tricky problem, in addition to giving this team the opportunity to learn and use cutting-edge technologies - so much fun!

Jules: I learned a lot about the Drupal Community. This was my first BADCamp, and also my first Drupal conference. I was excited about how generous the community is with knowledge and tools, working together so we can succeed together.

I learned about some of the changes to Drupal releases from @Webchick's talk (Drupal 9 and Backward Compatibility: Why Now is the Time to Upgrade to Drupal 8). If I keep up with the incremental point releases (i.e. 8.x), upgrading to 9 should be pretty painless, which is a relief. Knowing the incremental releases will be coming out on a regular six-month-ish cadence will make planning easier. I'm also excited about the new features in the works, including Layouts, Workspaces, a better out-of-the-box experience on first install, and a better admin UI experience (possibly with React?).

What would you tell someone who is planning to attend BADCamp next year?

Amy: Definitely invest in attending the full-day sessions if they interest you. The information I took away from my Pattern Lab day was priceless, and I came back to ThinkShout excited and empowered to figure out a way to make component-based theming part of our usual practice.

Jules: The full day sessions were a great way to dive into deeper concepts. It’s hard to fully cover a subject in a shorter session. It also helps to show up with an open mind. It’s impossible to know everything about Drupal, and there are so many tools available. It was valuable just meeting people and talking to them about their workflows, challenges, and favorite new tools.

Do you consider BADCamp to be better for networking, professional development, or both?

Amy: My big focus was on professional development. There were so many good training days and sessions happening that those filled my schedule almost entirely. Of course, attending sessions (and being a session speaker!) is a great way to network with like-minded people too.

Jules: My goal was to immerse myself in the Drupal community. Since I’m new to Drupal, the sessions were really valuable for me. Returning with more experience, that might not be the case. It was valuable to see new ideas being presented, challenged, discussed, and explored with mutual respect and support. We’re all in this together. Some talks were stronger than others, but every speaker had a nugget of gold I could take with me. It was encouraging to meet peers and to see all of the great work people are doing out in the world. It also served as a reminder that great strides can come from many small steps (or pushes)!

Make time to learn

It can be difficult to take time away from project work and dedicate yourself to two or three days of conferencing. But when you disconnect and dive into several days of learning, it makes your contributions back at the office invaluable. As Jules commented to me after her first day of sessions, "it was like php church!"

Getting out of your usual environment and talking to other people opens your mind up to other ways of problem solving, and helps you arrive at solutions you otherwise wouldn’t get through sitting in your cubicle. We hope you’re inspired to go to a local Drupal Meetup or Camp – or even better, meet us at DrupalCon or NTC’s Drupal Day!

Agiledrop.com Blog: AGILEDROP: Why rejecting projects due to resourcing challenges is avoidable

Mon, 11/06/2017 - 08:49
Even though I have been with AGILEDROP for a little over three months now, I have already found myself in a situation where two of our potential clients were on the verge of turning down their clients. The reasons for that differed; I'll go into more detail later. The agencies we approached differed in size, one being bigger (more than 50 people), the other smaller (less than 10 people), and the challenges they faced were also different. As you will see, we could help both of them, but in the end only one of the agencies trusted that we were capable of delivering. From a simple… READ MORE

OSTraining: How to Highlight the Differences Between Two Images with the Zurb TwentyTwenty Module

Mon, 11/06/2017 - 06:41

The Zurb TwentyTwenty module is mostly intended to highlight the difference between two images on a Drupal site. You have certainly seen those advertising images for skin products, for example.

They would present half of the face before applying the product and half of the face after applying it. Besides such comparisons, you can use this module for other purposes as well. In this tutorial, you will learn how Zurb TwentyTwenty module works.

Appnovation Technologies: My First Book - Drupal 8 Module Development (Or Where I Have Been Lately)

Mon, 11/06/2017 - 06:00
My First Book - Drupal 8 Module Development (Or Where I Have Been Lately) If you’ve been wondering where I’ve been and why I haven’t been writing any articles lately, I am here to put your mind at ease: I've been working heavily on my first book about Drupal, called Drupal 8 Module Development. And I am happy to announce that it has finally been published and is available for purch...
