Drupal Planet

Subscribe to the Drupal Planet news feed
Drupal.org - aggregated feeds in category Planet Drupal
Updated: 20 min 35 sec ago

Drudesk: Drupal 8 for marketers: key benefits & useful modules

Wed, 08/14/2019 - 10:51

It's easy and enjoyable to create marketing campaigns, drive leads, and tell your brand's story to the world if your website is on the right CMS. Drupal 8's benefits will definitely impress any marketer. So let's take a closer look at the greatness of Drupal 8 for marketers, see what makes it so valuable, and name a few useful modules.

Dropsolid: Debugging segmentation faults in Drupal

Wed, 08/14/2019 - 05:15

Segfaults in Drupal are not a common occurrence - but when they do pop up, they can pose some tricky challenges... Our DevOps Engineer Mattias Michaux sheds some light on how to debug segmentation faults.

As a Drupal web developer, it can be frightening to encounter a so-called segmentation fault or segfault. This is a type of failure raised by hardware with memory protection, notifying you that the software has attempted to access a restricted area of memory. This kind of fault is common in languages with low-level memory management, such as C. Coding in a scripting language like PHP usually implies that you will be spared from segfaults, but on rare occasions these kinds of errors can still pop up. And if they do, they tend to leave no stack trace or meaningful error clue in the PHP log. This is because segfault errors originate in the layers located below the PHP engine. Let me talk you through a recent consulting project that involved a curious segmentation fault.

Segfault case study

Not so long ago, one of our clients started experiencing seemingly random ‘503 service unavailable’ errors, at random times on random pages. That’s plenty of randomness, without much of a clue to start from.

The segfaults had started occurring after an upgrade from PHP 5.6 to PHP 7.2 and the switch from mod_php, where PHP is executed as a module within the Apache process, to PHP-FPM, where PHP runs as a standalone service that Apache connects to.

We tried to reproduce the errors on a copy of the project's site and infrastructure. The segfault was only reproducible when we ran a scraping tool on the site. There was no apparent connection between the pages it occurred on, and the error came up with less than 1% of the total requests. Looking at the logs, there seemed to be no direct cause for this error.

[proxy_fcgi:error] [pid 10272] AH01067: Failed to read FastCGI header
[proxy_fcgi:error] [pid 10272] (104)Connection reset by peer: AH01075: Error dispatching request to ****

Example of the unhelpful Apache error message

The message above only tells us that something went wrong, but it provides no indication as to what the cause might be. Next, the PHP-FPM log indicated a segmentation fault:

WARNING: [pool web] child 3824 exited on signal 11 (SIGSEGV) after 3.353763 seconds from start


The syslog entry didn’t turn out to be very helpful either:

kernel: [4734894.041892] traps: php-fpm7.2[3760] general protection ip:555ce4cb6342 sp:7ffccab418c8 error:0
kernel: [4734894.041897] in php-fpm7.2[555ce4a51000+411000]


To find out what the root cause might be, we needed to recompile PHP with debugging enabled. This allowed us to produce core dumps, which contain a full backtrace. A backtrace is a summary of how the program got to a particular point. It displays one line per frame for many frames, starting with the frame that is currently being executed. I suggest finding a guide like this one on how to compile your PHP with debugging enabled.

After compiling PHP, we copied the php.ini and pool config from the original PHP version to the freshly compiled one. Next, we altered the config so the PHP-FPM pool used a separate name and socket path, and we changed that socket in the vhost. After starting the new PHP-FPM instance and running the crawler to reproduce the issue, we quickly saw the same 503 errors showing up, but this time with a core dump. We started comparing the dumps and tried to find a pattern. Fortunately, most cases pointed to the same thing: the unserialize() function being called on large objects. This made us look for specific PHP bugs that involved unserializing or memory allocation. We found an interesting one - in fact, someone had encountered almost the same issue before.

We decided to try the same approach by disabling garbage collection in the settings file:

ini_set('zend.enable_gc', 0);


We reran the scraper for multiple hours and no issues occurred. There are other documented cases where PHP's garbage collection interferes with Drupal's processing of huge amounts of data and objects. Because disabling garbage collection globally could have negative effects on the resource consumption of the live site, we also looked for a way to disable garbage collection only partially. This is indeed possible by patching includes/cache.inc so _cache_get_object doesn't run gc while fetching data.

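A minimal sketch of the idea behind that partial workaround (the helper name is hypothetical; the actual patch wraps the fetch inside Drupal 7's DrupalDatabaseCache class):

<?php

/**
 * Fetches and unserializes a cache item with garbage collection disabled.
 *
 * Sketch of the includes/cache.inc workaround: large objects are
 * unserialized while GC is off, then the previous state is restored so
 * the rest of the request is unaffected.
 */
function _cache_fetch_without_gc(callable $fetch) {
  $gc_was_enabled = gc_enabled();
  gc_disable();
  try {
    return $fetch();
  }
  finally {
    if ($gc_was_enabled) {
      gc_enable();
    }
  }
}
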
This puzzling problem took several hours and two people to solve, but in the end we managed to diagnose it correctly and implement a solid solution. An interesting case that led us to explore unusual parts of Drupal - and possibly something to bear in mind next time your Drupal environment produces a similar type of error.

gdb /opt/php/7.2/sbin/php-fpm /tmp/coredump-php-fpm.28651 -ex "source /opt/php-src/php7.2-build/php-7.2.17/.gdbinit" -ex "zbacktrace" --batch | grep "\[0x"
[0x7f2c2b4220e0] unserialize("O:4:"view":49:{s:8:"db_table";s:10:"views_view";s:10:"base_table";s:18:"commerce_line_item";s:10:"base_field";s:3:"nid";s:4:"name";s:25:"commerce_reports_products";s:3:"vid";s:0:"";s:11:"description";s:0:"";s:3:"tag";s:16:"commerce_reports";s:10:"human_nam...") [internal function]
[0x7f2c2b421f40] DrupalDatabaseCache->prepareItem(object[0x7f2c2b421f90]) /home/project/www/includes/cache.inc:520
[0x7f2c2b421d80] DrupalDatabaseCache->getMultiple(reference) /home/project/www/includes/cache.inc:433
[0x7f2c2b421cf0] cache_get_multiple(reference, "cache_views") /home/project/www/includes/cache.inc:113
[0x7f2c2b421a70] _ctools_export_get_defaults_from_cache("views_view", array(24)[0x7f2c2b421ad0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:746
[0x7f2c2b421390] _ctools_export_get_defaults("views_view", array(24)[0x7f2c2b4213f0]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:649
[0x7f2c2b4211a0] _ctools_export_get_some_defaults("views_view", array(24)[0x7f2c2b421200], array(1)[0x7f2c2b421210]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:783
[0x7f2c2b420210] ctools_export_load_object("views_view", "names", array(1)[0x7f2c2b420280]) /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:493
[0x7f2c2b420080] ctools_export_crud_load("views_view", "search_results") /home/project/www/sites/all/modules/contrib/ctools/includes/export.inc:81
[0x7f2c2b41ff50] views_get_view("search_results") /home/project/www/sites/all/modules/contrib/views/views.module:1683
[0x7f2c2b41fb50] views_block_view("-exp-search_results-page") /home/project/www/sites/all/modules/contrib/views/views.module:772
[0x7f2c2b41fa60] module_invoke("views", "block_view", array(1)[0x7f2c2b41fad0]) /home/project/www/includes/module.inc:934
[0x7f2c2b41f410] _block_render_blocks(array(0)[0x7f2c2b41f460]) /home/project/www/modules/block/block.module:911
[0x7f2c2b41f2d0] block_list("branding") /home/project/www/modules/block/block.module:690
[0x7f2c2b41f200] block_get_blocks_by_region("branding") /home/project/www/modules/block/block.module:319
[0x7f2c2b41eee0] block_page_build(reference) /home/project/www/modules/block/block.module:270
[0x7f2c2b41ecf0] drupal_render_page(reference) /home/project/www/includes/common.inc:5914
[0x7f2c2b41e570] drupal_deliver_html_page(array(1)[0x7f2c2b41e5c0]) /home/project/www/includes/common.inc:2761
[0x7f2c2b41e3d0] drupal_deliver_page(array(1)[0x7f2c2b41e420], "") /home/project/www/includes/common.inc:2634
[0x7f2c2b41e100] menu_execute_active_handler() /home/project/www/includes/menu.inc:542
[0x7f2c2b41e030] (main) /home/project/www/index.php:21

Web Wash: Use Taxonomy Terms as Webform Options in Drupal 8

Tue, 08/13/2019 - 22:30

Webform has a pretty robust system for managing lists of options. When you create a select box, you can define its options just for that element, or use a predefined list. If you go to Structure, Webforms, Configuration and click on Options, you can see all the predefined options, and you can create your own.

If you want to learn how to create your own predefined options, check out our tutorial: How to Use Webform Predefined Options in Drupal 8.

One thing to be aware of is that all of these options are stored as config files, which makes perfect sense: it is configuration.

But what if you want editors to manage the options?

Depending on how you deploy Drupal sites, if you change an option only on the production site, your change will be overridden the next time you deploy, because you import all new configuration changes.

To work around this, you could look at using Webform Config Ignore.

Another way of managing options is by using the Taxonomy system. An editor would simply manage all the terms from the Taxonomy page and nothing will be stored in config files.

In this tutorial, you’ll learn how to create a select element which uses a taxonomy vocabulary instead of the standard options.

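As a preview of where that tutorial ends up, the result is an element definition in the webform's YAML source along these lines (a hedged sketch: the element key, title, and a vocabulary named tags are assumptions; webform_term_select is the taxonomy-backed select element the Webform module provides):

category:
  '#type': webform_term_select
  '#title': Category
  '#vocabulary': tags
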
Hook 42: Exploring Contenta CMS

Tue, 08/13/2019 - 16:03

TEN7 Blog's Drupal Posts: Case Study: Our Family Wizard

Tue, 08/13/2019 - 14:30
Website Redesign: A Professional Face for a Market Leader

When parents divorce or separate, the child often becomes the unwilling intermediary for communication. Our Family Wizard (OFW) is a mobile application for co-parenting exes that facilitates and tracks communication, helps coordinate child duties and stores important information.

Drupal blog: Increasing Drupal speaker diversity

Tue, 08/13/2019 - 14:23

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

At Drupalcon Seattle, I spoke about some of the challenges Open Source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need in order to drive meaningful change.

Hook 42: Getting Started With Developer Environments

Tue, 08/13/2019 - 13:51

Acro Media: Using Drupal for Headless Ecommerce Customer Experiences

Tue, 08/13/2019 - 11:45

Back in April, BigCommerce, in partnership with Acro Media, announced the release of the BigCommerce for Drupal module. This module effectively bridges the gap between the BigCommerce SaaS ecommerce platform and the Drupal open source content management system. It allows Drupal to be used as the frontend customer experience engine for a headless BigCommerce ecommerce store.

For BigCommerce, this integration provides a new and exciting way to utilize their platform for creating innovative, content-rich commerce experiences that were not possible via BigCommerce alone.

For Drupal, this integration extends the options its users and site-builders have for adding ecommerce functionality into a Drupal site. The flexibility of Drupal combined with the stability and ease-of-use of BigCommerce opens up new possibilities for Drupal that didn’t previously exist.

Since the announcement, BigCommerce and Acro Media have continued to educate and promote this exciting new headless commerce option. A new post on the BigCommerce blog published last week, titled Leverage Headless Commerce To Transform Your User Experience with Drupal Ecommerce (link below), is a recent addition to this information campaign. The BigCommerce teams are experts in what they do, and Acro Media is an expert in open source integrations and Drupal. They asked if we could provide an introduction for their readers to really explain what Drupal is and where it fits into the headless commerce mix. This, of course, was an opportunity not to be missed, and so our teams buckled down together once again to provide readers with the best information possible.

So without further explanation, click the link below to learn how you can leverage headless commerce to transform your user experience with Drupal.

Additional resources:

Drupal Association blog: Midwest Drupal Summit 2019

Tue, 08/13/2019 - 11:04

It's always wonderful to have Drupal community members gather in my hometown. Ann Arbor, Michigan, USA has hosted the Midwest Drupal Summit (MWDS) every August since 2016. Previous MWDS events were held in Chicago, IL; Minneapolis, MN; and Madison, WI. This summit is three days of Drupal contribution, collaboration, and fun. The event is small but mighty. As with any contribution-focused Drupal event, MWDS is important because some of the most active community teams of the Drupal project dedicate time to work through challenges and celebrate what makes open source special.

Here are several topics I overheard discussed over the three days:

  • Drupal.org infrastructure - its current state and ideas for improvements

  • Coordination between the Composer Initiative and core maintainers

  • The Automatic Updates initiative sprinting on package signing

  • Ideas for Contribution Credit system enhancements, to expand recognition beyond activity that takes place on Drupal.org

  • Drupal core contributors and the Drupal Association engineering team collaborating on critical Drupal 9 release blockers

  • The Security Team talking through ongoing work

  • Gitlab merge requests workflow and UI

  • Connecting the work that contributors are doing across various projects

  • Fun social events


Group lunch in Ann Arbor.

This opportunity to listen and overhear the thought and care that go into Drupal is one I appreciate. It was fantastic to hear a community member tell the Drupal Association team that they are "impressed with the gitlab work. I created a sandbox and the URL worked." It's one thing to see public feedback on the work done for the community; it's a whole other thing to hear it in person.


Contribution comes in several forms - talking through ideas, working through blockers, and giving feedback and opinions are just a few ways to participate.

Local midwesterners who take the time to attend MWDS get an opportunity to dive into contribution on a variety of topics. There are always mentors and subject-matter experts ready to help. My own Drupal core commit happened at a past MWDS, where I gave feedback on an issue from the perspective of a content editor. This year, Wilson S. had a first-time commit on issue #3008029, and usually there's at least one first-time commit. A memorable one was the time Megan (megansanicki) had her first commit, which was also a live commit by Angie Byron (webchick).

Here's what a few participants had to say about their experience:

"I feel inspired as I watch the local community and visitors organically interact and participate with the discussions held around them." ~ Matthew Radcliffe (mradcliffe)

“This was my first Drupal Code Sprint event. Meeting all the great people from near and afar in person was awesome. Matthew Radcliffe helped me overcome my apprehension of contributing to Drupal Core. I look forward to continuing contributing and connecting with the community.” ~ Phill Tran (philltran)

"As a recent re-transplant back to Michigan, I wanted to get back in touch with my local Drupal community. Being a FE dev, sometimes it's hard to find things to contribute via core or porting of modules. Some of the content analysis and accessibility testing was really interesting to me. As someone who has not contributed in the past @mradcliff was an excellent teacher on how to get the sprint environment up and running and how to get started on issues." ~ Chris Sands (chrissands)

"As part of the Drupal Association staff, I find MWDS is always a wonderful opportunity to connect with some key community members and create more alignment between DA initiatives and community-driven work. It's also a wonderful time to catch up with Drupal family." ~ Tim Lehnen (hestenet)

"Always the best event of the year." ~ xjm

“I am glad to have met everyone. I had a one-on-one mentoring session with Matthew Radcliffe. It’s priceless!” ~ Wilson Suprapto (wilsonsp)

Did I mention that Chris also became a Drupal Association member during this event?! Thanks Chris!

The Drupal Association engineering team members are in daily contact with the community online. However, in-person events are serendipitous. The insight from community members who have the expertise to help Drupal.org improve for everyone is right there in the room. New contributors need only remember that the first step is any move you make to participate. I think this timely tweet I saw over the weekend sums it up:

The great thing about open source is you can often just contribute. Jump in and help - there's tons of documentation help needed, but open issues, bugs, etc. Start small but why not start today?

— Nick Ruffilo (@NickRuffilo) August 10, 2019

Special thanks to Michael Hess for organizing this event and Neha & Danny from University of Michigan for making sure everyone had a fantastic time.


Tim at the Treeline, on a zipline!

For reference, here are the issues that moved forward during the event.

Specbee: AMP It Up! The Why and How of Drupal AMP (And what it can do to your website)

Tue, 08/13/2019 - 10:08

When the open-source Accelerated Mobile Pages (AMP) project was launched in October 2015, Google AMP was often compared to Facebook's Instant Articles. The two tech giants share a common goal - to make web pages load faster. While AMP pages can be reached with a web URL, Facebook's Instant Articles aimed only at easing the pain for the company's app users. Teaming up with some powerful launch partners in the publishing and technology sectors, Google AMP aimed to shape the future of content distribution on mobile devices.

Fast forward to today, and Google AMP is one of the hottest things on the internet. With over 25 million website domains having published more than 4 billion AMP pages, it did not take long for the project to become a huge success. Comprising two main features - speed and support for monetization - AMP has far-reaching implications for enterprises, marketers, ecommerce, and organizations big and small. Given its feature set and its origin as a Google initiative, it is no surprise that AMP pages are featured more prominently in Google search results.

What is AMP?

With the rapid surge in mobile users, merely providing a website-like user experience no longer cuts it. Today's mobile users come with shorter attention spans and varied internet speeds. Businesses can cater to both challenges with a fast-loading, lightweight, app-like website built on Google AMP.

AMP is an open-source framework that simplifies HTML, streamlines CSS rules, restricts the use of JavaScript (AMP's component library can be used instead), and delivers pages via the Google AMP Cache (a proxy-based content delivery network).

Why AMP?

Impacting the technical architecture of digital assets, Google's open source initiative aims to provide streamlined web pages to mobile browsers and other apps.

It is Fast, like Really Fast

A Google AMP page loads about twice as fast as a comparable regular mobile page, with as little as one-tenth the latency. Intended to provide the fastest experience for mobile users, AMP means customers can access content faster, and they are more likely to stay on the page to make a purchase or enquire about your service, because they know it won't take long.

An Organic Boost

Eligibility for the AMP carousel that sits above the other search results on Google's results page can bring a substantial increase in organic results and traffic - a major boost for an organization's visibility. Though it is not responsible for increasing page authority or domain authority, Google AMP plays a key role in sending far more traffic your way.

ROI

Because AMP leverages, rather than disrupts, a website's existing web infrastructure, the cost of adopting it is much lower than that of competing technologies. In return, Google AMP enables a better user experience, which translates into better conversion rates on mobile devices.

Drupal & AMP

With better user engagement, higher dwell time, and easy navigation between content, businesses are bound to drive more traffic with AMP-friendly pages and increase their revenue. The AMP module is especially useful for marketers, as it is a great addition to their Drupal SEO efforts.

AMP produces HTML that makes the web a faster place. Implementing the AMP module in Drupal is really simple: just download, enable, and configure! Before you begin integrating the AMP module with Drupal, here is what you need.

AMP Module: The AMP module mainly handles the conversion of regular Drupal HTML pages to AMP-compliant pages. Two more components work alongside it:

AMP Theme: I'm sure you have come across AMP HTML and its standards - the ones responsible for making your content look effective and perform well on mobile. The Drupal AMP theme produces the markup required by these standards for websites looking to perform well in the mobile world. It consists of the AMP base theme and the ExAMPle sub-theme; users can create their own AMP sub-theme from scratch, or modify the default ExAMPle sub-theme for their specific requirements.

AMP PHP Library: The Drupal AMP PHP library handles the final corrections, converting raw HTML into AMP-compliant HTML.

How to setup AMP with Drupal?

Before you integrate AMP with Drupal, you need to understand that AMP does not replace your entire website. Instead, at its essence, the AMP module provides a view mode for content types, which is displayed when the browser asks for an AMP version.

Download the AMP Module

With your local environment prepped, type the following terminal command:

drush dl amp, amptheme, composer_manager

This command will download the AMP module, the AMP theme and the Composer Manager module (in case you do not have Composer Manager already).

If you have been a user of Drupal 8, you are probably familiar with Composer and its function as a packaging tool for PHP that installs dependencies for a project. Composer is used to install a PHP library that converts raw HTML into AMP HTML, and it will also help to get that library working with Drupal.

However, as the AMP module does not explicitly require Composer Manager as a dependency, alternate workflows can make use of the module's Composer files without using Composer Manager.

Next, enable the items that are required to get started:

drush en composer_manager, amptheme, ampsubtheme_example

Before enabling the AMP module itself, an AMP sub-theme needs to be enabled. The default configuration for the AMP module sets the AMP Theme to ‘ExAMPle subtheme.’

How to Enable AMP Module?

The AMP module for Drupal can be enabled using Drush. Once the module is enabled, Composer Manager will take care of downloading the other AMP libraries and their dependencies.

drush en amp

Configuration

Once everything is installed and enabled, AMP needs to be configured via the web interface before Drupal AMP pages can be displayed. First up, you need to decide which content types should have an AMP version; you might not need it for all of them. Enable a particular content type by clicking the "Enable AMP in Custom Display Settings" link. On the next page, open the "Custom Display Settings" fieldset, check the AMP box, then click Save.

Setting an AMP Theme

Once the AMP module and content types are configured, it is time to select a theme for AMP pages and configure it. The view modes and field formatters of the Drupal AMP module take care of the main content of the page. The Drupal AMP theme, on the other hand, changes the markup outside the main content area of the page.

The Drupal AMP theme also enables you to create custom styles for your AMP pages. On the main AMP config page, make sure that the AMP theme setting is set to the ExAMPle sub-theme or to the custom AMP sub-theme that you created.

One thing is certain: Google favours websites that provide the best experience possible for mobile users. With tough competition from Facebook Instant Articles and Apple News, Google AMP aims to decrease page loading time to improve the user experience. Drupal websites can leverage the AMP module to produce AMP HTML and substantially speed up page loading times. Beyond the success of publishers and ecommerce websites on Drupal 8, a number of other websites are utilizing AMP along with progressive web apps. With a bright future ahead, Google AMP will be one of the strongest tools for traffic generation and conversions.

Jacob Rockowitz: Governments should pay to fix accessibility issues in Drupal and Open Source projects

Tue, 08/13/2019 - 10:03

We need to nudge governments to start funding and fixing accessibility issues in the Open Source projects that are being used to build digital experiences. Most governments are required by law to build accessible websites and applications. Drupal’s commitment to accessibility is why Drupal is used by a lot of governments to create ambitious digital experiences.

Governments have complex budgeting systems and policies, which can make it difficult for them to contribute to Open Source. At the same time, there are many consulting agencies specializing in Drupal for government, and maybe these organizations need to consider fixing accessibility issues on behalf of their clients.

If an agency started contributing, funding, and fixing accessibility issues in Drupal core and Open Source, they’d be showing their government clients that they are indeed experts who understand the importance of getting involved in the Open Source community.

So I have started this blog post with a direct ask for governments to pay to fix accessibility issues without a full explanation as to why. It helps to step back and look at the bigger context: “Why should governments fix accessibility issues in Drupal Core?”

Governments are using Drupal

This summer’s DrupalGovCon in Washington, DC was the largest Drupal event on the East Coast of the United States. The conference was completely free to attend with 1500 registrants. There were dozens of sponsors promoting their Drupal expertise and services. My presentation, Webform for Government, included a section about accessibility. There were also four sessions dedicated to accessibility.

Besides presenting at DrupalGovCon, I... Read More

Agaric Collective: Migrating dates into Drupal

Tue, 08/13/2019 - 04:19

Today we will learn how to migrate dates into Drupal. Depending on your field type and configuration, there are various possible combinations. You can store a single date or a date range. You can store only the date component or also include the time. You might have timezones to take into account. Importing the node creation date requires a slightly different configuration. In addition to the examples, a list of things to consider when migrating dates is also presented.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD date, whose machine name is ud_migrations_date. The migration to execute is udm_date. Notice that this migration writes to a content type called UD Date and to three fields: field_ud_date, field_ud_date_range, and field_ud_datetime. This content type and fields will be created when the module is installed. They will also be removed when the module is uninstalled. The module itself depends on the following modules provided by Drupal core: datetime, datetime_range, and migrate.

Note: Configuration placed in a module’s config/install directory will be copied to Drupal’s active configuration. And if those files have a dependencies/enforced/module key, the configuration will be removed when the listed modules are uninstalled. That is how the content type and fields are automatically created.

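For example, a config/install file shipped with this demo module would include a key shaped like this (a sketch of the structure just described, using this module's machine name):

dependencies:
  enforced:
    module:
      - ud_migrations_date
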
PHP date format characters

To migrate dates, you need to be familiar with the format characters of the date PHP function. Basically, you need to find a pattern that matches the date format you need to migrate to and from. For example, January 1, 2019 is described by the F j, Y pattern.

As mentioned in the previous post, you need to pay close attention to how you create the pattern. Upper and lowercase letters represent different things, like Y and y for the four-digit versus two-digit year, respectively. Some date components have subtle variations, like d and j for the day with or without leading zeros, respectively. Also, take into account white spaces and date component separators. If you need to include a literal letter like T, it has to be escaped as \T. If the pattern is wrong, an error will be raised, and the migration will fail.

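Because the Migrate API reuses these format characters, you can sanity-check a pattern in plain PHP before running a migration. A small sketch using the example date above:

<?php

// 'F j, Y' parses a date like 'January 1, 2019'.
$date = DateTime::createFromFormat('F j, Y', 'January 1, 2019');
// Print it as 'Y-m-d', the format Drupal uses for date-only fields.
echo $date->format('Y-m-d'); // 2019-01-01
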
Date format conversions

For date conversions, you use the format_date plugin. You specify a from_format based on your source and a to_format based on what Drupal expects. In both cases, you will use the PHP date function's format characters to assemble the required patterns. Optionally, you can define the from_timezone and to_timezone configurations if conversions are needed. Just like any other migration, you need to understand your source format. The following code snippet shows the source and destination sections:

source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      node_title: 'Date example 1'
      node_creation_date: 'January 1, 2019 19:15:30'
      src_date: '2019/12/1'
      src_date_end: '2019/12/31'
      src_datetime: '2019/12/24 19:15:30'
destination:
  plugin: 'entity:node'
  default_bundle: ud_date

Node creation time migration

The node creation time is migrated using the created entity property. The source column that contains the data is node_creation_date. An example value is January 1, 2019 19:15:30. Drupal expects a UNIX timestamp like 1546370130. The following snippet shows how to do the transformation:

created:
  plugin: format_date
  source: node_creation_date
  from_format: 'F j, Y H:i:s'
  to_format: 'U'
  from_timezone: 'UTC'
  to_timezone: 'UTC'

Following the documentation, F j, Y H:i:s is the from_format and U is the to_format. In the example, it is assumed that the source is provided in UTC. UNIX timestamps are expressed in UTC as well. Therefore, the from_timezone and to_timezone are both set to that value. Even though they are the same, it is important to specify both configuration keys. Otherwise, the from timezone might be picked up from your server's configuration. Refer to the article on user migrations for more details on how to migrate when UNIX timestamps are expected.

Date only migration

The Date module provided by core offers two storage options. You can store the date only, or you can choose to store the date and time. First, let’s consider a date only field. The source column that contains the data is src_date. An example value is '2019/12/1'. Drupal expects date only fields to store data in Y-m-d format like '2019-12-01'. No timezones are involved in migrating this field. The following snippet shows how to do the transformation.

field_ud_date/value:
  plugin: format_date
  source: src_date
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

Date range migration

The Date Range module provided by Drupal core allows you to have a start and an end date in a single field. The src_date and src_date_end source columns contain the start and end date, respectively. This migration is very similar to date only fields. The difference is that you need to import an extra subfield to store the end date. The following snippet shows how to do the transformation:

field_ud_date_range/value: '@field_ud_date/value'
field_ud_date_range/end_value:
  plugin: format_date
  source: src_date_end
  from_format: 'Y/m/j'
  to_format: 'Y-m-d'

The value subfield stores the start date. The source column used in the example is the same used for the field_ud_date field. Drupal uses the same format internally for date only and date range fields. Considering these two things, it is possible to reuse the field_ud_date mapping to set the start date of the field_ud_date_range field. To do it, you type the name of the previously mapped field in quotes (') and precede it with an at sign (@). Details on this syntax can be found in the blog post about the migrate process pipeline. One important detail is that when field_ud_date was mapped, the value subfield was specified: field_ud_date/value. Because of this, when reusing that mapping, you must also specify the subfield: '@field_ud_date/value'. The end_value subfield stores the end date. The mapping is similar to field_ud_date except that the source column is src_date_end.

Note: The Date Range module does not come enabled by default. To be able to use it in the example, it is set as a dependency of the demo migration module.

Datetime migration

A date and time field stores its value in Y-m-d\TH:i:s format. Note it does not include a timezone. Instead, UTC is assumed by default. In the example, the source column that contains the data is src_datetime. An example value is 2019/12/24 19:15:30. Let’s assume that all dates are provided with a timezone value of America/Managua. The following snippet shows how to do the transformation:

field_ud_datetime/value:
  plugin: format_date
  source: src_datetime
  from_format: 'Y/m/j H:i:s'
  to_format: 'Y-m-d\TH:i:s'
  from_timezone: 'America/Managua'
  to_timezone: 'UTC'

If you need the timezone to be dynamic, things get a bit harder. The from_timezone and to_timezone settings expect a literal value. It is not possible to read a source column to set these configurations. An alternative is that your source column includes timezone information like 2019/12/24 19:15:30 -07:00. In that case, you would need to tweak the from_format to include the timezone component and leave out the from_timezone configuration.

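For instance, the P format character matches an offset like -07:00. A hedged PHP sketch of that variation, using the hypothetical source value above:

<?php

// The trailing 'P' consumes the '-07:00' offset, so no fixed
// from_timezone configuration is needed.
$date = DateTime::createFromFormat('Y/m/j H:i:s P', '2019/12/24 19:15:30 -07:00');
// Convert to UTC before formatting, mirroring to_timezone: 'UTC'.
$date->setTimezone(new DateTimeZone('UTC'));
echo $date->format('Y-m-d\TH:i:s'); // 2019-12-25T02:15:30
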
Things to consider

Date migrations can be tricky because they can be affected by things outside of the Migrate API. Here is a non-exhaustive list of things to consider:

  • For date and time fields, the transformation might be affected by your server’s timezone if you do not manually set the from_timezone configuration.
  • People might see the date and time according to the preferences in their user profile. That is, two users might see a different value for the same migrated field if their preferred timezones are not the same.
  • For date only fields, the user might see a time depending on the format used to display them. A list of available formats can be found at /admin/config/regional/date-time.
  • A field can always be configured to be presented in a specific timezone. This would override the site’s timezone and the user’s preferred timezone.

What did you learn in today’s blog post? Did you know that entity properties and date fields expect different destination formats? Did you know how to do timezone conversions? What challenges have you found when migrating dates and times? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.

Spinning Code: Drupal Tome + Docksal + Netlify

Mon, 08/12/2019 - 21:00

Drupal Tome is a static site generator distribution of Drupal 8. It provides mechanisms for taking an entire Drupal site and exporting all the content to HTML for direct service. As part of a recent competition at SCDUG to come up with the cheapest possible Drupal 8 hosting, I decided to do a proof-of-concept level implementation of Drupal 8 with Docksal for local content editing, and Netlify for hosting (total cost was just the domain registration).

The Tome project has directions for setup with Docker, and for setup with Netlify, but they don't quite line up with each other (I followed the Docker instructions, then the Netlify set, but had to chart my own course to get the site from the first project linked to the repo in the second). Since I'm getting used to using Docksal, when I had to fall back and do a bit of it myself I realized it was almost painfully easy to set up.

The first step was to go to the Tome documentation for Netlify and set up an account and a site from the template. There is a button in those directions to trigger the Netlify setup; I've added one here as well (but if this one fails, check to see if they updated theirs):

Login with Github or similar service, and let it create a repo for your project.

Follow Netlify's directions for setting up DNS so you can have the domain you want, and HTTPS (through Let's Encrypt). It took a couple of hours for that detail to run right, but it eventually worked. For this project I chose a subdomain of my main blog domain: tome-netlify.spinningcode.org

Next go to GitHub (or whatever service you used) and clone the repository to your local machine. There is a generated README in that project, but the directions aren't 100% correct if you aren't cloning onto a machine with a working PHP environment. This is when I switched over to Docksal and ran the following series of commands:

fin init
fin composer install
fin drush tome:install
fin drush uli

Then log into your local site using the domain from Docksal and the link from Drush, and add some content.

Next we export the content from Drupal to send over to Netlify for deployment.

fin drush tome:static
git add .
git commit -m "Adding sample content"
git push

…now we wait while Netlify notices and builds the site…

If you look at the site a few minutes later the new content should be posted.

This is all well and good if I want to use the version of the site generated for the Netlify example, but I wanted to make sure I could do something more interesting. These days Drupal ships with an install profile called Umami that provides a more robust sample site than the more traditional Standard install.

So now let’s try to get Unami onto this site. Go back to the terminal and have Tome reset everything (it’ll warn you that you are about to nuke everything):

fin drush tome:init

…select Umami when it asks for a profile… and wait, because this takes a while…

Now just re-export the content and push it to your repo.

fin drush tome:static
git add .
git commit -m "Converting to Umami"
git push

And wait again, because this also takes a few minutes…

The Umami home page on my subdomain hosted at Netlify.

That really was all that was involved for a simple site. You can see my repository on GitHub if you want to see all of what was generated along the way.

The whole process is pretty straightforward, but there are a few things that it helps to understand.

First, Netlify is actually regenerating the markup on their servers with this approach. The Drupal nodes, and other entities, are saved as JSON and then imported during the build. This makes the process reliable, but slow. Umami takes several minutes to deploy, since Netlify is installing and configuring Drupal, loading the content, and generating the output. The build command provided in that template is clear enough to follow if you are familiar with Composer projects:

command = "composer install && ./vendor/bin/drush tome:install -y && ./vendor/bin/drush tome:static -l $DEPLOY_PRIME_URL"

One upside of this is that you can use a totally unrelated domain for your local testing and have it adjust correctly to the production domain. When you are using Netlify's branching workflow for managing dev, test, and production, it also protects your work that way.

My directions above load a standard Docksal container because that's quick and easy, which includes MySQL, but Tome falls back to using an SQLite database since you can be more confident it is there. Again, this is reliable but slow. If I were going to do this on a more complete project I'd want a smaller Docksal setup, or I'd switch to using MySQL locally.

A workflow based on this approach might also struggle with concurrent edits or complex configuration on large sites. It would probably make more sense to have the content created on a hidden, but traditional, server and then run through a different workflow. But for someone working on a series of small sites that are rarely updated, it could work well: a totally temporary instance of the site can be rapidly deployed to a device, have its content updated, be pushed out to production, and then be deleted locally until needed again.

The final detail to note is that there is no support for forms built into this solution. Netlify has support for that, and Tome has a module that claims to connect to that service, but I wasn't able to quickly determine how to get it connected. I am confident there are solutions to this problem, but it is something that would take a little additional work.

DEV :: Drupal, Skepticism and Spaceships...: Why is my entity untranslatable?

Mon, 08/12/2019 - 20:49

Drupal has pretty good multilingual support out of the box. It's also fairly easy to create new entities and just add translation support through the annotation. These things are well documented elsewhere and a quick search will reveal how to do that. That is not what this post is about. This post is about the UX around selecting which fields are translatable.

On the Content Language page at http://example.com/admin/config/regional/content-language you can select which fields on your nodes, entities and various other translatable elements will be available on non-default language edit pages. The section at the top is the list of types of translatable things. Checking these boxen will reveal the related section. You can then go down to that section and start selecting fields to translate, save the form and they become available. All nice and easy.

I came into the current project late and this is my first exposure to this area of Drupal. We have a few content types and a lot of entities. I was ticking the box for the entity I wanted to add, jumping to the end of the form and saving it. When the form came back, though, it was not selected. I could not figure out why. It wasn't until a co-worker used the form differently to me that the issue was resolved. Greg ticked the entity, scrolled down the page and found it, ticked some of the checkboxen in the entity itself and then saved the page. The checkbox was still ticked.

The UX on this is pretty good once you know how it works. It could be improved fairly easily with a system message pointing out that your checkbox was not saved because none of the items it exposed were selected.

I feel a patch coming on…


Dries Buytaert: Increasing Drupal speaker diversity

Mon, 08/12/2019 - 16:44

At Drupalcon Seattle, I spoke about some of the challenges Open Source communities like Drupal often have with increasing contributor diversity. We want our contributor base to look like everyone in the world who uses Drupal's technology on the internet, and unfortunately, that is not quite the reality today.

One way to step up is to help more people from underrepresented groups speak at Drupal conferences and workshops. Seeing and hearing from a more diverse group of people can inspire new contributors from all races, ethnicities, gender identities, geographies, religious groups, and more.

To help with this effort, the Drupal Diversity and Inclusion group is hosting a speaker diversity training workshop on September 21 and 28 with Jill Binder, whose expertise has also driven major speaker diversity improvements within the WordPress community.

I'd encourage you to either sign up for this session yourself or send the information to someone in a marginalized group who has knowledge to share, but may be hesitant to speak up. Helping someone see that their expertise is valuable is the kind of support we need to drive meaningful change.

Promet Source: Driving Human Connections in a Digital Environment

Mon, 08/12/2019 - 12:57
Like many companies in our technology-enabled, globally connected environment, Promet Source operates with clients and team members all over the world. This reality creates a challenge for communications. The truth is, the more we put into our interactions, the more we get out of them.

Electric Citizen: Mastering Drupal 8 Multilingual: Part 2 of 3

Mon, 08/12/2019 - 06:08

In Mastering Drupal 8 Multilingual: Part 1 of 3, we focused on planning for Drupal 8 multilingual and its impact on a project's timeline and budget.

In Part 2 (below), we cover everything you need to know to have a functioning multilingual site with no custom code. Part 3 of the series covers more advanced techniques for site builders and front-end developers.

Lullabot: Behind the Screens: Behind the Screens with Sean Dietrich

Mon, 08/12/2019 - 04:00

Docksal co-maintainer and BADCamp co-organizer, Sean Dietrich, talks about what it takes to run a camp website, why he became a Docksal co-maintainer, and why we could all use a little more time.

Agaric Collective: Migrating users into Drupal - Part 2

Sun, 08/11/2019 - 19:59

Today we complete the user migration example. In the previous post, we covered how to migrate email, timezone, username, password, and status. This time, we cover creation date, roles, and profile pictures. The source, destination, and dependencies configurations were explained already. Therefore, we are jumping straight to the process transformations in this entry.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD users, whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. In particular, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation to have two migrations is for the images to be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and constants and pseudofields.

Migrating user creation date

Have a look at the previous post for details on the source values. For reference, the user creation time is provided by the member_since column, and one of the values is April 4, 2014. The following snippet shows how the various user date related properties are set:

created:
  plugin: format_date
  source: member_since
  from_format: 'F j, Y'
  to_format: 'U'
changed: '@created'
access: '@created'
login: '@created'

The created entity property stores a UNIX timestamp of when the user was added to Drupal. The value itself is an integer representing the number of seconds since the epoch. For example, 280299600 represents Sun, 19 Nov 1978 05:00:00 GMT. Kudos to the readers who knew this is Drupal's default expire HTTP header. Bonus points if you knew it was chosen in honor of someone's birthdate. ;-)

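You can verify that timestamp with a line of PHP (a trivial sketch):

<?php

// 280299600 seconds after the epoch, rendered as an HTTP-style date in UTC.
echo gmdate('D, d M Y H:i:s', 280299600) . ' GMT'; // Sun, 19 Nov 1978 05:00:00 GMT
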
Back to the migration, you need to transform the provided date from Month day, year format to a UNIX timestamp. To do this, you use the format_date plugin. The from_format is set to F j, Y which means your source date consists of:

  • The full textual representation of a month: April.
  • Followed by a space character.
  • Followed by the day of the month without leading zeros: 4.
  • Followed by a comma and another space character.
  • Followed by the full numeric representation of a year using four digits: 2014

If the value of from_format does not make sense, you are not alone. It is actually assembled from format characters of the date PHP function. When you need to specify the from and to formats, you basically need to look at the documentation and assemble a string that matches the desired date format. You need to pay close attention because upper and lowercase letters represent different things like Y and y for the year with four-digits versus two-digits respectively. Some date components have subtle variations like d and j for the day with or without leading zeros respectively. Also, take into account white spaces and date component separators. To finish the plugin configuration, you need to set the to_format configuration to something that produces a UNIX timestamp. If you look again at the documentation, you will see that U does the job.

The changed, access, and login entity properties are also dates in UNIX timestamp format. changed indicates when the user account was last updated. access indicates when the user last accessed the site. login indicates when the user last logged in. For brevity, the same value assigned to created is also assigned to these three entity properties. The at sign (@) means copy the value of a previous mapping in the process pipeline. If needed, each property can be set to a different value or left unassigned. None is actually required.

Migrating user roles

For reference, the roles are provided by the user_roles column, and one of the values is forum moderator, forum admin. It is a comma separated list of roles from the legacy system which need to be mapped to Drupal roles. It is possible that the user_roles column is not provided at all in the source. The following snippet shows how the roles are set:

roles:
  - plugin: skip_on_empty
    method: process
    source: user_roles
  - plugin: explode
    delimiter: ','
  - plugin: callback
    callable: trim
  - plugin: static_map
    map:
      'forum admin': administrator
      'webmaster': administrator
    default_value: null

First, the skip_on_empty plugin is used to skip the processing of the roles if the source column is missing. Then, the explode plugin is used to break the list into an array of strings representing the roles. Next, the callback plugin invokes the trim PHP function to remove any leading or trailing whitespace from the role names. Finally, the static_map plugin is used to manually map values from the legacy system to Drupal roles. All of these plugins have been explained previously. Refer to other articles in the series or the plugin documentation for details on how to use and configure them.

There are some things that are worth mentioning about migrating roles using this particular process pipeline. If the comma separated list includes spaces before or after the role name, you need to trim the value because the static map will perform an equality check. Having extraneous space characters will produce a mismatch.

Also, you do not need to map the anonymous or authenticated roles. Drupal users are assumed to be authenticated and cannot be anonymous. Any other role needs to be mapped manually to its machine name. You can find the machine name of any role on its edit page. In the example, only two out of four roles are mapped. Any role that is not found in the static map will be assigned the value null, as indicated in the default_value configuration. After processing, the null value will be ignored, and no role will be assigned. But you could use this feature to assign a default role in case the static map does not produce a match.

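The whole pipeline is easy to reason about as plain PHP. A sketch that mirrors the configuration above (this is not code the Migrate API runs, just the equivalent logic):

<?php

// Mirrors explode -> trim -> static_map for one source value.
$user_roles = 'forum moderator, forum admin';
$map = ['forum admin' => 'administrator', 'webmaster' => 'administrator'];
$roles = [];
foreach (explode(',', $user_roles) as $role) {
  // Unmapped names get NULL, like default_value, and are ignored later.
  $roles[] = $map[trim($role)] ?? NULL;
}
print_r(array_filter($roles)); // Array ( [1] => administrator )
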
Migrating profile pictures

For reference, the profile picture is provided by the user_photo column, and one of the values is P01. This value corresponds to the unique identifier of one record in the udm_user_pictures file migration, which is part of the same demo module. It is important to note that the user_picture field is not a user entity property. The field is created by the standard installation profile and attached to the user entity. You can find its configuration in the “Manage fields” tab of the “Account settings” configuration page at /admin/config/people/accounts. The following snippet shows how profile pictures are set:

user_picture/target_id:
  plugin: migration_lookup
  migration: udm_user_pictures
  source: user_photo

Image fields are entity references. Their target_id property needs to be an integer number containing the file id (fid) of the image. This can be obtained using the migration_lookup plugin. Details on how to configure it can be found in this article. You could simply use user_picture as your field mapping because target_id is the default subfield and could be omitted. Also note that the alt subfield is not mapped. If present, its value will be used for the alternative text of the image. But if it is not specified, like in this example, Drupal will automatically generate an alternative text out of the username. An example value would be: Profile picture for user michele.

Technical note: The user entity contains other properties you can write to. For a list of available options, check the baseFieldDefinitions() method of the User class defining the entity. Note that more properties can be available up in the class hierarchy.

And with that, we wrap up the user migration example. We covered how to migrate a user’s mail, timezone, username, password, status, creation date, roles, and profile picture. Along the way, we presented various process plugins that had not been used previously in the series. We showed a couple of examples of process plugin chaining to make sure the migrated data is valid and in the format expected by Drupal.

What did you learn in today’s blog post? Did you know how to process dates for user entity properties? Have you migrated user roles before? Did you know how to import profile pictures? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series is made possible thanks to these generous sponsors. Contact us if your organization would like to support this documentation project, whether the migration series or other topics.

Read more and discuss at agaric.coop.

Agaric Collective: Migrating users into Drupal - Part 1

Sat, 08/10/2019 - 17:20

Today we are going to learn how to migrate users into Drupal. The example code will be explained in two blog posts. In this one, we cover the migration of email, timezone, username, password, and status. In the next one, we will cover creation date, roles, and profile pictures. Several techniques will be implemented to ensure that the migrated data is valid. For example, making sure that usernames are not duplicated.

Although the example is standalone, we will build on many of the concepts that have already been covered in the series. For instance, a file migration is included to import images used as profile pictures. This topic has been explained in detail in a previous post, and the example code is quite similar. Therefore, no explanation of the file migration is provided, to keep the focus on the user migration. Feel free to read other posts in the series if you need a refresher.

Getting the code

You can get the full code example at https://github.com/dinarcon/ud_migrations. The module to enable is UD users, whose machine name is ud_migrations_users. The two migrations to execute are udm_user_pictures and udm_users. Notice that both migrations belong to the same module. Refer to this article to learn where the module should be placed.

The example assumes Drupal was installed using the standard installation profile. In particular, we depend on a Picture (user_picture) image field attached to the user entity. The word in parentheses represents the machine name of the image field.

The explanation below is only for the user migration. It depends on a file migration to get the profile pictures. One motivation for having two migrations is that the images will be deleted if the file migration is rolled back. Note that other techniques exist for migrating images without having to create a separate migration. We have covered two of them in the articles about subfields and about constants and pseudofields.

Understanding the source

It is very important to understand the format of your source data. This will guide the transformation process required to produce the expected destination format. For this example, it is assumed that the legacy system from which users are being imported did not have unique usernames. Emails were used to uniquely identify users, but that is not desired in the new Drupal site. Instead, a username will be created from a public_name source column. Special measures will be taken to prevent duplication, as Drupal usernames must be unique. Two more things to consider. First, source passwords are provided in plain text (never do this!). Second, some elements, like roles and the profile picture, might be missing in the source. The following snippet shows a sample record for the source section:

source:
  plugin: embedded_data
  data_rows:
    - legacy_id: 101
      public_name: 'Michele'
      user_email: 'micky@example.com'
      user_timezone: 'America/New_York'
      user_password: 'totally insecure password 1'
      user_status: 'active'
      member_since: 'January 1, 2011'
      user_roles: 'forum moderator, forum admin'
      user_photo: 'P01'
  ids:
    legacy_id:
      type: integer

Configuring the destination and dependencies

The destination section specifies that user is the target entity. When that is the case, you can set an optional md5_passwords configuration. If it is set to true, the system will take an MD5-hashed password and convert it to the hashing algorithm that Drupal uses. For more information on password migrations, refer to these articles covering basic and advanced use cases. To migrate the profile pictures, a separate migration is created. The dependency of user on file is added explicitly. Refer to these articles for more information on migrating images and files and on setting dependencies. The following code snippet shows how the destination and dependencies are set:

destination:
  plugin: 'entity:user'
  md5_passwords: true
migration_dependencies:
  required:
    - udm_user_pictures
  optional: []

Processing the fields

The interesting part of a user migration is the field mapping. The specific transformation will depend on your source, but some arguably complex cases will be addressed in the example. Let’s start with the basics: verbatim copies from source to destination. The following snippet shows three mappings:

mail: user_email
init: user_email
timezone: user_timezone

The mail, init, and timezone entity properties are copied directly from the source. Both mail and init are email addresses. The difference is that mail stores the current email, while init stores the one used when the account was first created. The former might change if the user updates their profile, while the latter will never change. The timezone needs to be a string taken from a specific set of values. Refer to this page for a list of supported timezones.

name:
  - plugin: machine_name
    source: public_name
  - plugin: make_unique_entity_field
    entity_type: user
    field: name
    postfix: _

The name entity property stores the username, which has to be unique in the system. If the source data contained a unique value for each record, it could be used to set the username. However, none of the unique source columns (e.g., legacy_id) is suitable to be used as a username. Therefore, extra processing is needed. The machine_name plugin converts the public_name source column into a transliterated string with some restrictions: any character that is not a number or a letter will be converted to an underscore. The transformed value is sent to the make_unique_entity_field plugin. This plugin makes sure its input value is not repeated in the whole system for a particular entity field. In this example, the username will be unique. The plugin is configured by indicating which entity type and field (property) you want to check. If an equal value already exists, a new one is created by appending what you define as postfix plus a number. In this example, there are two records with public_name set to Benjamin. The usernames produced by running the process plugin chain will be benjamin and benjamin_1.

process:
  pass:
    plugin: callback
    callable: md5
    source: user_password
destination:
  plugin: 'entity:user'
  md5_passwords: true

The pass entity property stores the user’s password. In this example, the source provides the passwords in plain text. Needless to say, that is a terrible idea. But let’s work with it for now. Drupal uses portable PHP password hashes implemented by PhpassHashedPassword. Understanding the details of how Drupal converts one algorithm to another will be left as an exercise for the curious reader. In this example, we take advantage of a feature provided by the Migrate API to automatically convert MD5 hashes to the algorithm used by Drupal. The callback plugin is configured to use the md5 PHP function to convert the plain text password into a hashed version. The last piece of the puzzle is setting the md5_passwords configuration to true in the destination section. This will take care of converting the already MD5-hashed password to the value expected by Drupal.

Note: MD5-hashed passwords are insecure. In the example, the password is hashed with MD5 as an intermediate step only. Drupal uses other algorithms to store passwords securely.
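As a side note, if the legacy system already stored MD5 hashes, the callback step would be unnecessary. A minimal sketch, assuming a hypothetical user_password_md5 source column containing those hashes:

process:
  pass: user_password_md5
destination:
  plugin: 'entity:user'
  md5_passwords: true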

status:
  plugin: static_map
  source: user_status
  map:
    inactive: 0
    active: 1

The status entity property stores whether a user is active or blocked from the system. The source user_status values are strings, but Drupal stores this data as a boolean. A value of zero (0) indicates that the user is blocked, while a value of one (1) indicates that the user is active. The static_map plugin is used to manually map the values from source to destination. This plugin expects a map configuration containing an array of key-value mappings. The value from the source is on the left. The value expected by Drupal is on the right.

Technical note: Booleans are true or false values. Even though Drupal treats the status property as a boolean, it is internally stored as a tinyint in the database. That is why the numbers zero and one are used in the example. For this particular case, using a number or a boolean value on the right side of the mapping produces the same result.

In the next blog post, we will continue with the user migration. Particularly, we will explain how to migrate the user creation time, roles, and profile pictures.

What did you learn in today’s blog post? Have you migrated user passwords before, either in plain text or hashed? Did you know how to prevent duplicates for values that need to be unique in the system? Were you aware of the plugin that allows you to manually map values from source to destination? Please share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Read more and discuss at agaric.coop.
